00:00:00.001 Started by upstream project "autotest-per-patch" build number 132558 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.117 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.118 The recommended git tool is: git 00:00:00.118 using credential 00000000-0000-0000-0000-000000000002 00:00:00.120 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.179 Fetching changes from the remote Git repository 00:00:00.181 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.229 Using shallow fetch with depth 1 00:00:00.229 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.229 > git --version # timeout=10 00:00:00.271 > git --version # 'git version 2.39.2' 00:00:00.271 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.302 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.302 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:07.624 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:07.639 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:07.652 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:07.652 > git config core.sparsecheckout # timeout=10 00:00:07.665 > git read-tree -mu HEAD # timeout=10 00:00:07.682 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:07.706 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:07.706 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:07.799 [Pipeline] Start of Pipeline 00:00:07.812 [Pipeline] library 00:00:07.814 Loading library shm_lib@master 00:00:07.814 Library shm_lib@master is cached. Copying from home. 00:00:07.830 [Pipeline] node 00:00:22.835 Still waiting to schedule task 00:00:22.835 Waiting for next available executor on ‘vagrant-vm-host’ 00:02:34.932 Running on VM-host-SM4 in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:02:34.933 [Pipeline] { 00:02:34.948 [Pipeline] catchError 00:02:34.949 [Pipeline] { 00:02:34.963 [Pipeline] wrap 00:02:34.971 [Pipeline] { 00:02:34.977 [Pipeline] stage 00:02:34.978 [Pipeline] { (Prologue) 00:02:35.046 [Pipeline] echo 00:02:35.047 Node: VM-host-SM4 00:02:35.052 [Pipeline] cleanWs 00:02:35.072 [WS-CLEANUP] Deleting project workspace... 00:02:35.072 [WS-CLEANUP] Deferred wipeout is used... 
00:02:35.082 [WS-CLEANUP] done 00:02:35.318 [Pipeline] setCustomBuildProperty 00:02:35.388 [Pipeline] httpRequest 00:02:35.706 [Pipeline] echo 00:02:35.708 Sorcerer 10.211.164.20 is alive 00:02:35.718 [Pipeline] retry 00:02:35.720 [Pipeline] { 00:02:35.734 [Pipeline] httpRequest 00:02:35.738 HttpMethod: GET 00:02:35.739 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:02:35.740 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:02:35.740 Response Code: HTTP/1.1 200 OK 00:02:35.741 Success: Status code 200 is in the accepted range: 200,404 00:02:35.741 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:02:35.888 [Pipeline] } 00:02:35.902 [Pipeline] // retry 00:02:35.909 [Pipeline] sh 00:02:36.189 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:02:36.203 [Pipeline] httpRequest 00:02:36.511 [Pipeline] echo 00:02:36.513 Sorcerer 10.211.164.20 is alive 00:02:36.523 [Pipeline] retry 00:02:36.525 [Pipeline] { 00:02:36.541 [Pipeline] httpRequest 00:02:36.545 HttpMethod: GET 00:02:36.546 URL: http://10.211.164.20/packages/spdk_2f2acf4eb25cee406c156120cee22721275ca7fd.tar.gz 00:02:36.546 Sending request to url: http://10.211.164.20/packages/spdk_2f2acf4eb25cee406c156120cee22721275ca7fd.tar.gz 00:02:36.547 Response Code: HTTP/1.1 200 OK 00:02:36.548 Success: Status code 200 is in the accepted range: 200,404 00:02:36.548 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk_2f2acf4eb25cee406c156120cee22721275ca7fd.tar.gz 00:02:38.781 [Pipeline] } 00:02:38.800 [Pipeline] // retry 00:02:38.808 [Pipeline] sh 00:02:39.086 + tar --no-same-owner -xf spdk_2f2acf4eb25cee406c156120cee22721275ca7fd.tar.gz 00:02:42.381 [Pipeline] sh 00:02:42.666 + git -C spdk log --oneline -n5 00:02:42.666 2f2acf4eb doc: move nvmf_tracing.md to tracing.md 00:02:42.666 5592070b3 doc: update nvmf_tracing.md 00:02:42.666 5ca6db5da nvme_spec: Add SPDK_NVME_IO_FLAGS_PRCHK_MASK 00:02:42.666 f7ce15267 bdev: Insert or overwrite metadata using bounce/accel buffer if NVMe PRACT is set 00:02:42.666 aa58c9e0b dif: Add spdk_dif_pi_format_get_size() to use for NVMe PRACT 00:02:42.680 [Pipeline] writeFile 00:02:42.691 [Pipeline] sh 00:02:42.965 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:02:42.978 [Pipeline] sh 00:02:43.256 + cat autorun-spdk.conf 00:02:43.256 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:43.256 SPDK_TEST_NVMF=1 00:02:43.256 SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:43.256 SPDK_TEST_URING=1 00:02:43.256 SPDK_TEST_USDT=1 00:02:43.256 SPDK_RUN_UBSAN=1 00:02:43.256 NET_TYPE=virt 00:02:43.256 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:43.262 RUN_NIGHTLY=0 00:02:43.265 [Pipeline] } 00:02:43.279 [Pipeline] // stage 00:02:43.298 [Pipeline] stage 00:02:43.301 [Pipeline] { (Run VM) 00:02:43.314 [Pipeline] sh 00:02:43.592 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:02:43.592 + echo 'Start stage prepare_nvme.sh' 00:02:43.592 Start stage prepare_nvme.sh 00:02:43.592 + [[ -n 10 ]] 00:02:43.592 + disk_prefix=ex10 00:02:43.592 + [[ -n /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest ]] 00:02:43.592 + [[ -e /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf ]] 00:02:43.592 + source /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf 00:02:43.592 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:43.592 ++ SPDK_TEST_NVMF=1 00:02:43.592 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 
00:02:43.592 ++ SPDK_TEST_URING=1 00:02:43.592 ++ SPDK_TEST_USDT=1 00:02:43.592 ++ SPDK_RUN_UBSAN=1 00:02:43.592 ++ NET_TYPE=virt 00:02:43.592 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:43.592 ++ RUN_NIGHTLY=0 00:02:43.592 + cd /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:02:43.592 + nvme_files=() 00:02:43.592 + declare -A nvme_files 00:02:43.593 + backend_dir=/var/lib/libvirt/images/backends 00:02:43.593 + nvme_files['nvme.img']=5G 00:02:43.593 + nvme_files['nvme-cmb.img']=5G 00:02:43.593 + nvme_files['nvme-multi0.img']=4G 00:02:43.593 + nvme_files['nvme-multi1.img']=4G 00:02:43.593 + nvme_files['nvme-multi2.img']=4G 00:02:43.593 + nvme_files['nvme-openstack.img']=8G 00:02:43.593 + nvme_files['nvme-zns.img']=5G 00:02:43.593 + (( SPDK_TEST_NVME_PMR == 1 )) 00:02:43.593 + (( SPDK_TEST_FTL == 1 )) 00:02:43.593 + (( SPDK_TEST_NVME_FDP == 1 )) 00:02:43.593 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:02:43.593 + for nvme in "${!nvme_files[@]}" 00:02:43.593 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme-multi2.img -s 4G 00:02:43.593 Formatting '/var/lib/libvirt/images/backends/ex10-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:02:43.593 + for nvme in "${!nvme_files[@]}" 00:02:43.593 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme-cmb.img -s 5G 00:02:43.593 Formatting '/var/lib/libvirt/images/backends/ex10-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:02:43.593 + for nvme in "${!nvme_files[@]}" 00:02:43.593 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme-openstack.img -s 8G 00:02:43.593 Formatting '/var/lib/libvirt/images/backends/ex10-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:02:43.593 + for nvme in "${!nvme_files[@]}" 00:02:43.593 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme-zns.img -s 5G 00:02:43.593 Formatting '/var/lib/libvirt/images/backends/ex10-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:02:43.593 + for nvme in "${!nvme_files[@]}" 00:02:43.593 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme-multi1.img -s 4G 00:02:43.593 Formatting '/var/lib/libvirt/images/backends/ex10-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:02:43.593 + for nvme in "${!nvme_files[@]}" 00:02:43.593 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme-multi0.img -s 4G 00:02:43.593 Formatting '/var/lib/libvirt/images/backends/ex10-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:02:43.593 + for nvme in "${!nvme_files[@]}" 00:02:43.593 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme.img -s 5G 00:02:43.850 Formatting '/var/lib/libvirt/images/backends/ex10-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:02:43.850 ++ sudo grep -rl ex10-nvme.img /etc/libvirt/qemu 00:02:43.850 + echo 'End stage prepare_nvme.sh' 00:02:43.850 End stage prepare_nvme.sh 00:02:43.864 [Pipeline] sh 00:02:44.145 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:02:44.145 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex10-nvme.img -b 
/var/lib/libvirt/images/backends/ex10-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex10-nvme-multi1.img:/var/lib/libvirt/images/backends/ex10-nvme-multi2.img -H -a -v -f fedora39 00:02:44.145 00:02:44.145 DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant 00:02:44.145 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk 00:02:44.145 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:02:44.145 HELP=0 00:02:44.145 DRY_RUN=0 00:02:44.145 NVME_FILE=/var/lib/libvirt/images/backends/ex10-nvme.img,/var/lib/libvirt/images/backends/ex10-nvme-multi0.img, 00:02:44.145 NVME_DISKS_TYPE=nvme,nvme, 00:02:44.145 NVME_AUTO_CREATE=0 00:02:44.145 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex10-nvme-multi1.img:/var/lib/libvirt/images/backends/ex10-nvme-multi2.img, 00:02:44.145 NVME_CMB=,, 00:02:44.145 NVME_PMR=,, 00:02:44.145 NVME_ZNS=,, 00:02:44.145 NVME_MS=,, 00:02:44.145 NVME_FDP=,, 00:02:44.145 SPDK_VAGRANT_DISTRO=fedora39 00:02:44.145 SPDK_VAGRANT_VMCPU=10 00:02:44.145 SPDK_VAGRANT_VMRAM=12288 00:02:44.145 SPDK_VAGRANT_PROVIDER=libvirt 00:02:44.145 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:02:44.145 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:02:44.145 SPDK_OPENSTACK_NETWORK=0 00:02:44.145 VAGRANT_PACKAGE_BOX=0 00:02:44.145 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:02:44.145 FORCE_DISTRO=true 00:02:44.145 VAGRANT_BOX_VERSION= 00:02:44.145 EXTRA_VAGRANTFILES= 00:02:44.145 NIC_MODEL=e1000 00:02:44.145 00:02:44.145 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt' 00:02:44.145 /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:02:48.334 Bringing machine 'default' up with 'libvirt' provider... 00:02:48.592 ==> default: Creating image (snapshot of base box volume). 00:02:48.849 ==> default: Creating domain with the following settings... 
00:02:48.849 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1732653043_2a40f5675fb85c8796dc 00:02:48.849 ==> default: -- Domain type: kvm 00:02:48.849 ==> default: -- Cpus: 10 00:02:48.849 ==> default: -- Feature: acpi 00:02:48.849 ==> default: -- Feature: apic 00:02:48.849 ==> default: -- Feature: pae 00:02:48.849 ==> default: -- Memory: 12288M 00:02:48.849 ==> default: -- Memory Backing: hugepages: 00:02:48.849 ==> default: -- Management MAC: 00:02:48.849 ==> default: -- Loader: 00:02:48.849 ==> default: -- Nvram: 00:02:48.849 ==> default: -- Base box: spdk/fedora39 00:02:48.849 ==> default: -- Storage pool: default 00:02:48.849 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1732653043_2a40f5675fb85c8796dc.img (20G) 00:02:48.849 ==> default: -- Volume Cache: default 00:02:48.849 ==> default: -- Kernel: 00:02:48.849 ==> default: -- Initrd: 00:02:48.849 ==> default: -- Graphics Type: vnc 00:02:48.849 ==> default: -- Graphics Port: -1 00:02:48.849 ==> default: -- Graphics IP: 127.0.0.1 00:02:48.849 ==> default: -- Graphics Password: Not defined 00:02:48.849 ==> default: -- Video Type: cirrus 00:02:48.849 ==> default: -- Video VRAM: 9216 00:02:48.849 ==> default: -- Sound Type: 00:02:48.849 ==> default: -- Keymap: en-us 00:02:48.849 ==> default: -- TPM Path: 00:02:48.849 ==> default: -- INPUT: type=mouse, bus=ps2 00:02:48.849 ==> default: -- Command line args: 00:02:48.849 ==> default: -> value=-device, 00:02:48.849 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:02:48.849 ==> default: -> value=-drive, 00:02:48.849 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex10-nvme.img,if=none,id=nvme-0-drive0, 00:02:48.849 ==> default: -> value=-device, 00:02:48.849 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:48.849 ==> default: -> value=-device, 00:02:48.849 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:02:48.849 ==> default: -> value=-drive, 00:02:48.849 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex10-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:02:48.849 ==> default: -> value=-device, 00:02:48.849 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:48.850 ==> default: -> value=-drive, 00:02:48.850 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex10-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:02:48.850 ==> default: -> value=-device, 00:02:48.850 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:48.850 ==> default: -> value=-drive, 00:02:48.850 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex10-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:02:48.850 ==> default: -> value=-device, 00:02:48.850 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:49.108 ==> default: Creating shared folders metadata... 00:02:49.108 ==> default: Starting domain. 00:02:51.032 ==> default: Waiting for domain to get an IP address... 00:03:09.122 ==> default: Waiting for SSH to become available... 00:03:11.026 ==> default: Configuring and enabling network interfaces... 
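[For readers following the device wiring: the "-> value=..." pairs above are the QEMU arguments libvirt passes through for the emulated NVMe controllers. Assembled by hand (binary path, image paths, serials and PCI addresses copied from the log; everything else is only an illustrative sketch, not the exact command libvirt generated), the equivalent invocation looks roughly like:

  /usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 ... \
    -device nvme,id=nvme-0,serial=12340,addr=0x10 \
    -drive format=raw,file=/var/lib/libvirt/images/backends/ex10-nvme.img,if=none,id=nvme-0-drive0 \
    -device nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,logical_block_size=4096,physical_block_size=4096 \
    -device nvme,id=nvme-1,serial=12341,addr=0x11 \
    -drive format=raw,file=/var/lib/libvirt/images/backends/ex10-nvme-multi0.img,if=none,id=nvme-1-drive0 \
    -device nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,logical_block_size=4096,physical_block_size=4096 \
    -drive format=raw,file=/var/lib/libvirt/images/backends/ex10-nvme-multi1.img,if=none,id=nvme-1-drive1 \
    -device nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,logical_block_size=4096,physical_block_size=4096 \
    -drive format=raw,file=/var/lib/libvirt/images/backends/ex10-nvme-multi2.img,if=none,id=nvme-1-drive2 \
    -device nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,logical_block_size=4096,physical_block_size=4096

That is, controller nvme-0 (serial 12340) is backed by the single ex10-nvme.img namespace, while controller nvme-1 (serial 12341) exposes three namespaces backed by ex10-nvme-multi{0,1,2}.img; the zoned=false property from the log is left in place above for fidelity.]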
00:03:16.287 default: SSH address: 192.168.121.39:22 00:03:16.287 default: SSH username: vagrant 00:03:16.287 default: SSH auth method: private key 00:03:17.664 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:03:27.653 ==> default: Mounting SSHFS shared folder... 00:03:27.911 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:03:27.911 ==> default: Checking Mount.. 00:03:29.290 ==> default: Folder Successfully Mounted! 00:03:29.290 ==> default: Running provisioner: file... 00:03:30.226 default: ~/.gitconfig => .gitconfig 00:03:30.793 00:03:30.793 SUCCESS! 00:03:30.793 00:03:30.793 cd to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:03:30.793 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:03:30.793 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 00:03:30.793 00:03:30.801 [Pipeline] } 00:03:30.818 [Pipeline] // stage 00:03:30.827 [Pipeline] dir 00:03:30.827 Running in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt 00:03:30.829 [Pipeline] { 00:03:30.841 [Pipeline] catchError 00:03:30.844 [Pipeline] { 00:03:30.858 [Pipeline] sh 00:03:31.136 + vagrant ssh-config --host vagrant 00:03:31.136 + sed -ne /^Host/,$p 00:03:31.136 + tee ssh_conf 00:03:35.399 Host vagrant 00:03:35.399 HostName 192.168.121.39 00:03:35.399 User vagrant 00:03:35.399 Port 22 00:03:35.399 UserKnownHostsFile /dev/null 00:03:35.399 StrictHostKeyChecking no 00:03:35.399 PasswordAuthentication no 00:03:35.400 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:03:35.400 IdentitiesOnly yes 00:03:35.400 LogLevel FATAL 00:03:35.400 ForwardAgent yes 00:03:35.400 ForwardX11 yes 00:03:35.400 00:03:35.414 [Pipeline] withEnv 00:03:35.417 [Pipeline] { 00:03:35.432 [Pipeline] sh 00:03:35.713 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:03:35.713 source /etc/os-release 00:03:35.713 [[ -e /image.version ]] && img=$(< /image.version) 00:03:35.713 # Minimal, systemd-like check. 00:03:35.713 if [[ -e /.dockerenv ]]; then 00:03:35.713 # Clear garbage from the node's name: 00:03:35.713 # agt-er_autotest_547-896 -> autotest_547-896 00:03:35.713 # $HOSTNAME is the actual container id 00:03:35.713 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:03:35.713 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:03:35.713 # We can assume this is a mount from a host where container is running, 00:03:35.713 # so fetch its hostname to easily identify the target swarm worker. 
00:03:35.713 container="$(< /etc/hostname) ($agent)" 00:03:35.713 else 00:03:35.713 # Fallback 00:03:35.713 container=$agent 00:03:35.713 fi 00:03:35.713 fi 00:03:35.714 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:03:35.714 00:03:35.983 [Pipeline] } 00:03:36.000 [Pipeline] // withEnv 00:03:36.010 [Pipeline] setCustomBuildProperty 00:03:36.025 [Pipeline] stage 00:03:36.027 [Pipeline] { (Tests) 00:03:36.045 [Pipeline] sh 00:03:36.324 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:03:36.597 [Pipeline] sh 00:03:36.875 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:03:37.147 [Pipeline] timeout 00:03:37.147 Timeout set to expire in 1 hr 0 min 00:03:37.149 [Pipeline] { 00:03:37.166 [Pipeline] sh 00:03:37.444 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:03:38.010 HEAD is now at 2f2acf4eb doc: move nvmf_tracing.md to tracing.md 00:03:38.022 [Pipeline] sh 00:03:38.336 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:03:38.615 [Pipeline] sh 00:03:38.894 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:03:39.169 [Pipeline] sh 00:03:39.450 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-uring-vg-autotest ./autoruner.sh spdk_repo 00:03:39.709 ++ readlink -f spdk_repo 00:03:39.709 + DIR_ROOT=/home/vagrant/spdk_repo 00:03:39.709 + [[ -n /home/vagrant/spdk_repo ]] 00:03:39.709 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:03:39.709 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:03:39.709 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:03:39.709 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:03:39.709 + [[ -d /home/vagrant/spdk_repo/output ]] 00:03:39.709 + [[ nvmf-tcp-uring-vg-autotest == pkgdep-* ]] 00:03:39.709 + cd /home/vagrant/spdk_repo 00:03:39.709 + source /etc/os-release 00:03:39.709 ++ NAME='Fedora Linux' 00:03:39.709 ++ VERSION='39 (Cloud Edition)' 00:03:39.709 ++ ID=fedora 00:03:39.709 ++ VERSION_ID=39 00:03:39.709 ++ VERSION_CODENAME= 00:03:39.709 ++ PLATFORM_ID=platform:f39 00:03:39.709 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:03:39.709 ++ ANSI_COLOR='0;38;2;60;110;180' 00:03:39.709 ++ LOGO=fedora-logo-icon 00:03:39.709 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:03:39.709 ++ HOME_URL=https://fedoraproject.org/ 00:03:39.709 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:03:39.709 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:03:39.709 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:03:39.709 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:03:39.709 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:03:39.709 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:03:39.709 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:03:39.709 ++ SUPPORT_END=2024-11-12 00:03:39.709 ++ VARIANT='Cloud Edition' 00:03:39.709 ++ VARIANT_ID=cloud 00:03:39.709 + uname -a 00:03:39.709 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:03:39.709 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:03:40.274 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:40.274 Hugepages 00:03:40.274 node hugesize free / total 00:03:40.274 node0 1048576kB 0 / 0 00:03:40.274 node0 2048kB 0 / 0 00:03:40.274 00:03:40.274 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:40.274 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:03:40.274 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:03:40.274 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:03:40.274 + rm -f /tmp/spdk-ld-path 00:03:40.274 + source autorun-spdk.conf 00:03:40.274 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:03:40.274 ++ SPDK_TEST_NVMF=1 00:03:40.274 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:03:40.274 ++ SPDK_TEST_URING=1 00:03:40.274 ++ SPDK_TEST_USDT=1 00:03:40.274 ++ SPDK_RUN_UBSAN=1 00:03:40.274 ++ NET_TYPE=virt 00:03:40.274 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:03:40.274 ++ RUN_NIGHTLY=0 00:03:40.274 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:03:40.274 + [[ -n '' ]] 00:03:40.274 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:03:40.274 + for M in /var/spdk/build-*-manifest.txt 00:03:40.274 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:03:40.274 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:03:40.274 + for M in /var/spdk/build-*-manifest.txt 00:03:40.274 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:03:40.274 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:03:40.274 + for M in /var/spdk/build-*-manifest.txt 00:03:40.274 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:03:40.274 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:03:40.274 ++ uname 00:03:40.274 + [[ Linux == \L\i\n\u\x ]] 00:03:40.274 + sudo dmesg -T 00:03:40.274 + sudo dmesg --clear 00:03:40.274 + dmesg_pid=5258 00:03:40.274 + sudo dmesg -Tw 00:03:40.274 + [[ Fedora Linux == FreeBSD ]] 00:03:40.274 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:03:40.274 + 
UNBIND_ENTIRE_IOMMU_GROUP=yes 00:03:40.274 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:03:40.274 + [[ -x /usr/src/fio-static/fio ]] 00:03:40.274 + export FIO_BIN=/usr/src/fio-static/fio 00:03:40.274 + FIO_BIN=/usr/src/fio-static/fio 00:03:40.274 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:03:40.275 + [[ ! -v VFIO_QEMU_BIN ]] 00:03:40.275 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:03:40.275 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:03:40.275 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:03:40.275 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:03:40.275 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:03:40.275 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:03:40.275 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:03:40.533 20:31:35 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:03:40.533 20:31:35 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf 00:03:40.533 20:31:35 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:03:40.533 20:31:35 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1 00:03:40.533 20:31:35 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp 00:03:40.533 20:31:35 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_URING=1 00:03:40.533 20:31:35 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_TEST_USDT=1 00:03:40.533 20:31:35 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1 00:03:40.533 20:31:35 -- spdk_repo/autorun-spdk.conf@7 -- $ NET_TYPE=virt 00:03:40.533 20:31:35 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:03:40.533 20:31:35 -- spdk_repo/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=0 00:03:40.533 20:31:35 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:03:40.533 20:31:35 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:03:40.533 20:31:35 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:03:40.533 20:31:35 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:40.533 20:31:35 -- scripts/common.sh@15 -- $ shopt -s extglob 00:03:40.533 20:31:35 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:03:40.533 20:31:35 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:40.533 20:31:35 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:40.533 20:31:35 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:40.533 20:31:35 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:40.533 20:31:35 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:40.533 20:31:35 -- paths/export.sh@5 -- $ export PATH 00:03:40.533 20:31:35 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:40.533 20:31:35 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:03:40.533 20:31:35 -- common/autobuild_common.sh@493 -- $ date +%s 00:03:40.533 20:31:35 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1732653095.XXXXXX 00:03:40.533 20:31:35 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1732653095.LcFluG 00:03:40.533 20:31:35 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:03:40.533 20:31:35 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']' 00:03:40.533 20:31:35 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:03:40.533 20:31:35 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:03:40.533 20:31:35 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:03:40.533 20:31:35 -- common/autobuild_common.sh@509 -- $ get_config_params 00:03:40.533 20:31:35 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:03:40.533 20:31:35 -- common/autotest_common.sh@10 -- $ set +x 00:03:40.533 20:31:35 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring' 00:03:40.533 20:31:35 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:03:40.533 20:31:35 -- pm/common@17 -- $ local monitor 00:03:40.533 20:31:35 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:40.533 20:31:35 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:40.533 20:31:35 -- pm/common@25 -- $ sleep 1 00:03:40.533 20:31:35 -- pm/common@21 -- $ date +%s 00:03:40.533 20:31:35 -- pm/common@21 -- $ date +%s 00:03:40.533 20:31:35 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732653095 00:03:40.533 20:31:35 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732653095 00:03:40.533 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732653095_collect-vmstat.pm.log 00:03:40.533 Redirecting to 
/home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732653095_collect-cpu-load.pm.log 00:03:41.466 20:31:36 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:03:41.466 20:31:36 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:03:41.466 20:31:36 -- spdk/autobuild.sh@12 -- $ umask 022 00:03:41.466 20:31:36 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:03:41.466 20:31:36 -- spdk/autobuild.sh@16 -- $ date -u 00:03:41.466 Tue Nov 26 08:31:36 PM UTC 2024 00:03:41.466 20:31:36 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:03:41.466 v25.01-pre-271-g2f2acf4eb 00:03:41.466 20:31:36 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:03:41.466 20:31:36 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:03:41.466 20:31:36 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:03:41.466 20:31:36 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:03:41.466 20:31:36 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:03:41.466 20:31:36 -- common/autotest_common.sh@10 -- $ set +x 00:03:41.466 ************************************ 00:03:41.466 START TEST ubsan 00:03:41.466 ************************************ 00:03:41.466 using ubsan 00:03:41.466 20:31:36 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:03:41.466 00:03:41.466 real 0m0.001s 00:03:41.466 user 0m0.000s 00:03:41.466 sys 0m0.000s 00:03:41.466 20:31:36 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:41.466 20:31:36 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:03:41.466 ************************************ 00:03:41.466 END TEST ubsan 00:03:41.466 ************************************ 00:03:41.725 20:31:36 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:03:41.725 20:31:36 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:03:41.725 20:31:36 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:03:41.725 20:31:36 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:03:41.725 20:31:36 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:03:41.725 20:31:36 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:03:41.725 20:31:36 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:03:41.725 20:31:36 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:03:41.725 20:31:36 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring --with-shared 00:03:41.725 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:03:41.725 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:03:42.291 Using 'verbs' RDMA provider 00:03:58.540 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:04:13.461 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:04:13.461 Creating mk/config.mk...done. 00:04:13.461 Creating mk/cc.flags.mk...done. 00:04:13.461 Type 'make' to build. 
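[Condensed sketch for reproducing this step outside the CI wrapper: the configure-and-build portion of the log boils down to the two commands below. The flags are copied verbatim from the autobuild.sh configure invocation above and -j10 matches the VM's 10 vCPUs; running make via -C instead of an explicit cd is the only liberty taken.

  /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt \
      --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
      --enable-ubsan --enable-coverage --with-ublk --with-uring --with-shared
  make -C /home/vagrant/spdk_repo/spdk -j10

The ubsan START TEST/END TEST block above is only autobuild.sh recording that UBSan is requested; the actual instrumentation comes from the --enable-ubsan configure flag.]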
00:04:13.461 20:32:07 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:04:13.461 20:32:07 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:04:13.461 20:32:07 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:04:13.461 20:32:07 -- common/autotest_common.sh@10 -- $ set +x 00:04:13.461 ************************************ 00:04:13.461 START TEST make 00:04:13.461 ************************************ 00:04:13.461 20:32:07 make -- common/autotest_common.sh@1129 -- $ make -j10 00:04:13.461 make[1]: Nothing to be done for 'all'. 00:04:25.667 The Meson build system 00:04:25.667 Version: 1.5.0 00:04:25.667 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:04:25.667 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:04:25.667 Build type: native build 00:04:25.667 Program cat found: YES (/usr/bin/cat) 00:04:25.667 Project name: DPDK 00:04:25.667 Project version: 24.03.0 00:04:25.667 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:04:25.667 C linker for the host machine: cc ld.bfd 2.40-14 00:04:25.667 Host machine cpu family: x86_64 00:04:25.667 Host machine cpu: x86_64 00:04:25.667 Message: ## Building in Developer Mode ## 00:04:25.667 Program pkg-config found: YES (/usr/bin/pkg-config) 00:04:25.667 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:04:25.667 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:04:25.667 Program python3 found: YES (/usr/bin/python3) 00:04:25.667 Program cat found: YES (/usr/bin/cat) 00:04:25.667 Compiler for C supports arguments -march=native: YES 00:04:25.667 Checking for size of "void *" : 8 00:04:25.667 Checking for size of "void *" : 8 (cached) 00:04:25.667 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:04:25.667 Library m found: YES 00:04:25.667 Library numa found: YES 00:04:25.667 Has header "numaif.h" : YES 00:04:25.667 Library fdt found: NO 00:04:25.667 Library execinfo found: NO 00:04:25.667 Has header "execinfo.h" : YES 00:04:25.667 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:04:25.667 Run-time dependency libarchive found: NO (tried pkgconfig) 00:04:25.667 Run-time dependency libbsd found: NO (tried pkgconfig) 00:04:25.667 Run-time dependency jansson found: NO (tried pkgconfig) 00:04:25.667 Run-time dependency openssl found: YES 3.1.1 00:04:25.667 Run-time dependency libpcap found: YES 1.10.4 00:04:25.667 Has header "pcap.h" with dependency libpcap: YES 00:04:25.667 Compiler for C supports arguments -Wcast-qual: YES 00:04:25.667 Compiler for C supports arguments -Wdeprecated: YES 00:04:25.667 Compiler for C supports arguments -Wformat: YES 00:04:25.667 Compiler for C supports arguments -Wformat-nonliteral: NO 00:04:25.667 Compiler for C supports arguments -Wformat-security: NO 00:04:25.667 Compiler for C supports arguments -Wmissing-declarations: YES 00:04:25.667 Compiler for C supports arguments -Wmissing-prototypes: YES 00:04:25.667 Compiler for C supports arguments -Wnested-externs: YES 00:04:25.667 Compiler for C supports arguments -Wold-style-definition: YES 00:04:25.667 Compiler for C supports arguments -Wpointer-arith: YES 00:04:25.667 Compiler for C supports arguments -Wsign-compare: YES 00:04:25.667 Compiler for C supports arguments -Wstrict-prototypes: YES 00:04:25.667 Compiler for C supports arguments -Wundef: YES 00:04:25.667 Compiler for C supports arguments -Wwrite-strings: YES 00:04:25.667 Compiler for C supports 
arguments -Wno-address-of-packed-member: YES 00:04:25.667 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:04:25.667 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:04:25.667 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:04:25.667 Program objdump found: YES (/usr/bin/objdump) 00:04:25.667 Compiler for C supports arguments -mavx512f: YES 00:04:25.667 Checking if "AVX512 checking" compiles: YES 00:04:25.667 Fetching value of define "__SSE4_2__" : 1 00:04:25.667 Fetching value of define "__AES__" : 1 00:04:25.667 Fetching value of define "__AVX__" : 1 00:04:25.667 Fetching value of define "__AVX2__" : 1 00:04:25.667 Fetching value of define "__AVX512BW__" : 1 00:04:25.667 Fetching value of define "__AVX512CD__" : 1 00:04:25.667 Fetching value of define "__AVX512DQ__" : 1 00:04:25.667 Fetching value of define "__AVX512F__" : 1 00:04:25.667 Fetching value of define "__AVX512VL__" : 1 00:04:25.667 Fetching value of define "__PCLMUL__" : 1 00:04:25.667 Fetching value of define "__RDRND__" : 1 00:04:25.667 Fetching value of define "__RDSEED__" : 1 00:04:25.667 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:04:25.667 Fetching value of define "__znver1__" : (undefined) 00:04:25.667 Fetching value of define "__znver2__" : (undefined) 00:04:25.667 Fetching value of define "__znver3__" : (undefined) 00:04:25.667 Fetching value of define "__znver4__" : (undefined) 00:04:25.667 Compiler for C supports arguments -Wno-format-truncation: YES 00:04:25.667 Message: lib/log: Defining dependency "log" 00:04:25.667 Message: lib/kvargs: Defining dependency "kvargs" 00:04:25.667 Message: lib/telemetry: Defining dependency "telemetry" 00:04:25.667 Checking for function "getentropy" : NO 00:04:25.667 Message: lib/eal: Defining dependency "eal" 00:04:25.667 Message: lib/ring: Defining dependency "ring" 00:04:25.667 Message: lib/rcu: Defining dependency "rcu" 00:04:25.667 Message: lib/mempool: Defining dependency "mempool" 00:04:25.667 Message: lib/mbuf: Defining dependency "mbuf" 00:04:25.667 Fetching value of define "__PCLMUL__" : 1 (cached) 00:04:25.667 Fetching value of define "__AVX512F__" : 1 (cached) 00:04:25.667 Fetching value of define "__AVX512BW__" : 1 (cached) 00:04:25.667 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:04:25.667 Fetching value of define "__AVX512VL__" : 1 (cached) 00:04:25.667 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:04:25.667 Compiler for C supports arguments -mpclmul: YES 00:04:25.667 Compiler for C supports arguments -maes: YES 00:04:25.667 Compiler for C supports arguments -mavx512f: YES (cached) 00:04:25.667 Compiler for C supports arguments -mavx512bw: YES 00:04:25.667 Compiler for C supports arguments -mavx512dq: YES 00:04:25.667 Compiler for C supports arguments -mavx512vl: YES 00:04:25.667 Compiler for C supports arguments -mvpclmulqdq: YES 00:04:25.667 Compiler for C supports arguments -mavx2: YES 00:04:25.667 Compiler for C supports arguments -mavx: YES 00:04:25.667 Message: lib/net: Defining dependency "net" 00:04:25.667 Message: lib/meter: Defining dependency "meter" 00:04:25.667 Message: lib/ethdev: Defining dependency "ethdev" 00:04:25.667 Message: lib/pci: Defining dependency "pci" 00:04:25.667 Message: lib/cmdline: Defining dependency "cmdline" 00:04:25.667 Message: lib/hash: Defining dependency "hash" 00:04:25.667 Message: lib/timer: Defining dependency "timer" 00:04:25.667 Message: lib/compressdev: Defining dependency "compressdev" 00:04:25.667 Message: 
lib/cryptodev: Defining dependency "cryptodev" 00:04:25.667 Message: lib/dmadev: Defining dependency "dmadev" 00:04:25.667 Compiler for C supports arguments -Wno-cast-qual: YES 00:04:25.667 Message: lib/power: Defining dependency "power" 00:04:25.667 Message: lib/reorder: Defining dependency "reorder" 00:04:25.667 Message: lib/security: Defining dependency "security" 00:04:25.667 Has header "linux/userfaultfd.h" : YES 00:04:25.667 Has header "linux/vduse.h" : YES 00:04:25.667 Message: lib/vhost: Defining dependency "vhost" 00:04:25.667 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:04:25.667 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:04:25.667 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:04:25.667 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:04:25.667 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:04:25.667 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:04:25.667 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:04:25.667 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:04:25.668 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:04:25.668 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:04:25.668 Program doxygen found: YES (/usr/local/bin/doxygen) 00:04:25.668 Configuring doxy-api-html.conf using configuration 00:04:25.668 Configuring doxy-api-man.conf using configuration 00:04:25.668 Program mandb found: YES (/usr/bin/mandb) 00:04:25.668 Program sphinx-build found: NO 00:04:25.668 Configuring rte_build_config.h using configuration 00:04:25.668 Message: 00:04:25.668 ================= 00:04:25.668 Applications Enabled 00:04:25.668 ================= 00:04:25.668 00:04:25.668 apps: 00:04:25.668 00:04:25.668 00:04:25.668 Message: 00:04:25.668 ================= 00:04:25.668 Libraries Enabled 00:04:25.668 ================= 00:04:25.668 00:04:25.668 libs: 00:04:25.668 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:04:25.668 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:04:25.668 cryptodev, dmadev, power, reorder, security, vhost, 00:04:25.668 00:04:25.668 Message: 00:04:25.668 =============== 00:04:25.668 Drivers Enabled 00:04:25.668 =============== 00:04:25.668 00:04:25.668 common: 00:04:25.668 00:04:25.668 bus: 00:04:25.668 pci, vdev, 00:04:25.668 mempool: 00:04:25.668 ring, 00:04:25.668 dma: 00:04:25.668 00:04:25.668 net: 00:04:25.668 00:04:25.668 crypto: 00:04:25.668 00:04:25.668 compress: 00:04:25.668 00:04:25.668 vdpa: 00:04:25.668 00:04:25.668 00:04:25.668 Message: 00:04:25.668 ================= 00:04:25.668 Content Skipped 00:04:25.668 ================= 00:04:25.668 00:04:25.668 apps: 00:04:25.668 dumpcap: explicitly disabled via build config 00:04:25.668 graph: explicitly disabled via build config 00:04:25.668 pdump: explicitly disabled via build config 00:04:25.668 proc-info: explicitly disabled via build config 00:04:25.668 test-acl: explicitly disabled via build config 00:04:25.668 test-bbdev: explicitly disabled via build config 00:04:25.668 test-cmdline: explicitly disabled via build config 00:04:25.668 test-compress-perf: explicitly disabled via build config 00:04:25.668 test-crypto-perf: explicitly disabled via build config 00:04:25.668 test-dma-perf: explicitly disabled via build config 00:04:25.668 test-eventdev: explicitly disabled via build config 00:04:25.668 test-fib: explicitly disabled via build config 
00:04:25.668 test-flow-perf: explicitly disabled via build config 00:04:25.668 test-gpudev: explicitly disabled via build config 00:04:25.668 test-mldev: explicitly disabled via build config 00:04:25.668 test-pipeline: explicitly disabled via build config 00:04:25.668 test-pmd: explicitly disabled via build config 00:04:25.668 test-regex: explicitly disabled via build config 00:04:25.668 test-sad: explicitly disabled via build config 00:04:25.668 test-security-perf: explicitly disabled via build config 00:04:25.668 00:04:25.668 libs: 00:04:25.668 argparse: explicitly disabled via build config 00:04:25.668 metrics: explicitly disabled via build config 00:04:25.668 acl: explicitly disabled via build config 00:04:25.668 bbdev: explicitly disabled via build config 00:04:25.668 bitratestats: explicitly disabled via build config 00:04:25.668 bpf: explicitly disabled via build config 00:04:25.668 cfgfile: explicitly disabled via build config 00:04:25.668 distributor: explicitly disabled via build config 00:04:25.668 efd: explicitly disabled via build config 00:04:25.668 eventdev: explicitly disabled via build config 00:04:25.668 dispatcher: explicitly disabled via build config 00:04:25.668 gpudev: explicitly disabled via build config 00:04:25.668 gro: explicitly disabled via build config 00:04:25.668 gso: explicitly disabled via build config 00:04:25.668 ip_frag: explicitly disabled via build config 00:04:25.668 jobstats: explicitly disabled via build config 00:04:25.668 latencystats: explicitly disabled via build config 00:04:25.668 lpm: explicitly disabled via build config 00:04:25.668 member: explicitly disabled via build config 00:04:25.668 pcapng: explicitly disabled via build config 00:04:25.668 rawdev: explicitly disabled via build config 00:04:25.668 regexdev: explicitly disabled via build config 00:04:25.668 mldev: explicitly disabled via build config 00:04:25.668 rib: explicitly disabled via build config 00:04:25.668 sched: explicitly disabled via build config 00:04:25.668 stack: explicitly disabled via build config 00:04:25.668 ipsec: explicitly disabled via build config 00:04:25.668 pdcp: explicitly disabled via build config 00:04:25.668 fib: explicitly disabled via build config 00:04:25.668 port: explicitly disabled via build config 00:04:25.668 pdump: explicitly disabled via build config 00:04:25.668 table: explicitly disabled via build config 00:04:25.668 pipeline: explicitly disabled via build config 00:04:25.668 graph: explicitly disabled via build config 00:04:25.668 node: explicitly disabled via build config 00:04:25.668 00:04:25.668 drivers: 00:04:25.668 common/cpt: not in enabled drivers build config 00:04:25.668 common/dpaax: not in enabled drivers build config 00:04:25.668 common/iavf: not in enabled drivers build config 00:04:25.668 common/idpf: not in enabled drivers build config 00:04:25.668 common/ionic: not in enabled drivers build config 00:04:25.668 common/mvep: not in enabled drivers build config 00:04:25.668 common/octeontx: not in enabled drivers build config 00:04:25.668 bus/auxiliary: not in enabled drivers build config 00:04:25.668 bus/cdx: not in enabled drivers build config 00:04:25.668 bus/dpaa: not in enabled drivers build config 00:04:25.668 bus/fslmc: not in enabled drivers build config 00:04:25.668 bus/ifpga: not in enabled drivers build config 00:04:25.668 bus/platform: not in enabled drivers build config 00:04:25.668 bus/uacce: not in enabled drivers build config 00:04:25.668 bus/vmbus: not in enabled drivers build config 00:04:25.668 common/cnxk: not 
in enabled drivers build config 00:04:25.668 common/mlx5: not in enabled drivers build config 00:04:25.668 common/nfp: not in enabled drivers build config 00:04:25.668 common/nitrox: not in enabled drivers build config 00:04:25.668 common/qat: not in enabled drivers build config 00:04:25.668 common/sfc_efx: not in enabled drivers build config 00:04:25.668 mempool/bucket: not in enabled drivers build config 00:04:25.668 mempool/cnxk: not in enabled drivers build config 00:04:25.668 mempool/dpaa: not in enabled drivers build config 00:04:25.668 mempool/dpaa2: not in enabled drivers build config 00:04:25.668 mempool/octeontx: not in enabled drivers build config 00:04:25.668 mempool/stack: not in enabled drivers build config 00:04:25.668 dma/cnxk: not in enabled drivers build config 00:04:25.668 dma/dpaa: not in enabled drivers build config 00:04:25.668 dma/dpaa2: not in enabled drivers build config 00:04:25.668 dma/hisilicon: not in enabled drivers build config 00:04:25.668 dma/idxd: not in enabled drivers build config 00:04:25.668 dma/ioat: not in enabled drivers build config 00:04:25.668 dma/skeleton: not in enabled drivers build config 00:04:25.668 net/af_packet: not in enabled drivers build config 00:04:25.668 net/af_xdp: not in enabled drivers build config 00:04:25.668 net/ark: not in enabled drivers build config 00:04:25.668 net/atlantic: not in enabled drivers build config 00:04:25.668 net/avp: not in enabled drivers build config 00:04:25.668 net/axgbe: not in enabled drivers build config 00:04:25.668 net/bnx2x: not in enabled drivers build config 00:04:25.668 net/bnxt: not in enabled drivers build config 00:04:25.668 net/bonding: not in enabled drivers build config 00:04:25.668 net/cnxk: not in enabled drivers build config 00:04:25.668 net/cpfl: not in enabled drivers build config 00:04:25.668 net/cxgbe: not in enabled drivers build config 00:04:25.668 net/dpaa: not in enabled drivers build config 00:04:25.668 net/dpaa2: not in enabled drivers build config 00:04:25.668 net/e1000: not in enabled drivers build config 00:04:25.668 net/ena: not in enabled drivers build config 00:04:25.668 net/enetc: not in enabled drivers build config 00:04:25.668 net/enetfec: not in enabled drivers build config 00:04:25.668 net/enic: not in enabled drivers build config 00:04:25.668 net/failsafe: not in enabled drivers build config 00:04:25.668 net/fm10k: not in enabled drivers build config 00:04:25.668 net/gve: not in enabled drivers build config 00:04:25.668 net/hinic: not in enabled drivers build config 00:04:25.668 net/hns3: not in enabled drivers build config 00:04:25.668 net/i40e: not in enabled drivers build config 00:04:25.668 net/iavf: not in enabled drivers build config 00:04:25.668 net/ice: not in enabled drivers build config 00:04:25.668 net/idpf: not in enabled drivers build config 00:04:25.668 net/igc: not in enabled drivers build config 00:04:25.668 net/ionic: not in enabled drivers build config 00:04:25.668 net/ipn3ke: not in enabled drivers build config 00:04:25.668 net/ixgbe: not in enabled drivers build config 00:04:25.668 net/mana: not in enabled drivers build config 00:04:25.668 net/memif: not in enabled drivers build config 00:04:25.668 net/mlx4: not in enabled drivers build config 00:04:25.668 net/mlx5: not in enabled drivers build config 00:04:25.668 net/mvneta: not in enabled drivers build config 00:04:25.668 net/mvpp2: not in enabled drivers build config 00:04:25.668 net/netvsc: not in enabled drivers build config 00:04:25.668 net/nfb: not in enabled drivers build config 
00:04:25.668 net/nfp: not in enabled drivers build config 00:04:25.669 net/ngbe: not in enabled drivers build config 00:04:25.669 net/null: not in enabled drivers build config 00:04:25.669 net/octeontx: not in enabled drivers build config 00:04:25.669 net/octeon_ep: not in enabled drivers build config 00:04:25.669 net/pcap: not in enabled drivers build config 00:04:25.669 net/pfe: not in enabled drivers build config 00:04:25.669 net/qede: not in enabled drivers build config 00:04:25.669 net/ring: not in enabled drivers build config 00:04:25.669 net/sfc: not in enabled drivers build config 00:04:25.669 net/softnic: not in enabled drivers build config 00:04:25.669 net/tap: not in enabled drivers build config 00:04:25.669 net/thunderx: not in enabled drivers build config 00:04:25.669 net/txgbe: not in enabled drivers build config 00:04:25.669 net/vdev_netvsc: not in enabled drivers build config 00:04:25.669 net/vhost: not in enabled drivers build config 00:04:25.669 net/virtio: not in enabled drivers build config 00:04:25.669 net/vmxnet3: not in enabled drivers build config 00:04:25.669 raw/*: missing internal dependency, "rawdev" 00:04:25.669 crypto/armv8: not in enabled drivers build config 00:04:25.669 crypto/bcmfs: not in enabled drivers build config 00:04:25.669 crypto/caam_jr: not in enabled drivers build config 00:04:25.669 crypto/ccp: not in enabled drivers build config 00:04:25.669 crypto/cnxk: not in enabled drivers build config 00:04:25.669 crypto/dpaa_sec: not in enabled drivers build config 00:04:25.669 crypto/dpaa2_sec: not in enabled drivers build config 00:04:25.669 crypto/ipsec_mb: not in enabled drivers build config 00:04:25.669 crypto/mlx5: not in enabled drivers build config 00:04:25.669 crypto/mvsam: not in enabled drivers build config 00:04:25.669 crypto/nitrox: not in enabled drivers build config 00:04:25.669 crypto/null: not in enabled drivers build config 00:04:25.669 crypto/octeontx: not in enabled drivers build config 00:04:25.669 crypto/openssl: not in enabled drivers build config 00:04:25.669 crypto/scheduler: not in enabled drivers build config 00:04:25.669 crypto/uadk: not in enabled drivers build config 00:04:25.669 crypto/virtio: not in enabled drivers build config 00:04:25.669 compress/isal: not in enabled drivers build config 00:04:25.669 compress/mlx5: not in enabled drivers build config 00:04:25.669 compress/nitrox: not in enabled drivers build config 00:04:25.669 compress/octeontx: not in enabled drivers build config 00:04:25.669 compress/zlib: not in enabled drivers build config 00:04:25.669 regex/*: missing internal dependency, "regexdev" 00:04:25.669 ml/*: missing internal dependency, "mldev" 00:04:25.669 vdpa/ifc: not in enabled drivers build config 00:04:25.669 vdpa/mlx5: not in enabled drivers build config 00:04:25.669 vdpa/nfp: not in enabled drivers build config 00:04:25.669 vdpa/sfc: not in enabled drivers build config 00:04:25.669 event/*: missing internal dependency, "eventdev" 00:04:25.669 baseband/*: missing internal dependency, "bbdev" 00:04:25.669 gpu/*: missing internal dependency, "gpudev" 00:04:25.669 00:04:25.669 00:04:25.669 Build targets in project: 85 00:04:25.669 00:04:25.669 DPDK 24.03.0 00:04:25.669 00:04:25.669 User defined options 00:04:25.669 buildtype : debug 00:04:25.669 default_library : shared 00:04:25.669 libdir : lib 00:04:25.669 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:04:25.669 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:04:25.669 c_link_args : 
00:04:25.669 cpu_instruction_set: native 00:04:25.669 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:04:25.669 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:04:25.669 enable_docs : false 00:04:25.669 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:04:25.669 enable_kmods : false 00:04:25.669 max_lcores : 128 00:04:25.669 tests : false 00:04:25.669 00:04:25.669 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:04:25.669 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:04:25.669 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:04:25.927 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:04:25.927 [3/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:04:25.927 [4/268] Linking static target lib/librte_kvargs.a 00:04:25.927 [5/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:04:25.927 [6/268] Linking static target lib/librte_log.a 00:04:26.185 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:04:26.185 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:04:26.444 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:04:26.444 [10/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:04:26.444 [11/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:04:26.444 [12/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:04:26.444 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:04:26.444 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:04:26.444 [15/268] Linking static target lib/librte_telemetry.a 00:04:26.444 [16/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:04:26.444 [17/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:04:26.702 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:04:26.976 [19/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:04:27.274 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:04:27.274 [21/268] Linking target lib/librte_log.so.24.1 00:04:27.274 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:04:27.274 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:04:27.274 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:04:27.274 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:04:27.274 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:04:27.274 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:04:27.532 [28/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:04:27.532 [29/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:04:27.532 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:04:27.532 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:04:27.532 [32/268] Linking target lib/librte_kvargs.so.24.1 00:04:27.532 [33/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:04:27.532 [34/268] Linking target lib/librte_telemetry.so.24.1 00:04:27.791 [35/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:04:27.791 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:04:28.049 [37/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:04:28.049 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:04:28.049 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:04:28.049 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:04:28.049 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:04:28.049 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:04:28.049 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:04:28.049 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:04:28.049 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:04:28.049 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:04:28.049 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:04:28.307 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:04:28.564 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:04:28.564 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:04:28.564 [51/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:04:28.822 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:04:28.822 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:04:28.822 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:04:28.822 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:04:28.822 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:04:28.822 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:04:29.080 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:04:29.080 [59/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:04:29.080 [60/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:04:29.080 [61/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:04:29.338 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:04:29.338 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:04:29.338 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:04:29.596 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:04:29.596 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:04:29.596 [67/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:04:29.596 [68/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 
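Editor's note: the "User defined options" summary above (buildtype : debug, default_library : shared, libdir : lib, prefix : /home/vagrant/spdk_repo/spdk/dpdk/build, the c_args list, and the disable_apps / disable_libs / enable_drivers / max_lcores / tests values) is the Meson configuration that produces the [N/268] ninja steps which follow. A rough manual equivalent is sketched below; in the CI this is driven by SPDK's dpdk build wrapper rather than typed by hand, so the invocation form is an assumption — only the option values are taken from the summary, and the long app/lib/driver lists are abbreviated.

# Sketch only: reconfigure DPDK by hand with the options summarised above.
# Lists are abbreviated; the full values appear in the log's options summary.
cd /home/vagrant/spdk_repo/spdk/dpdk
meson setup build-tmp \
    --buildtype=debug \
    --default-library=shared \
    --libdir=lib \
    --prefix=/home/vagrant/spdk_repo/spdk/dpdk/build \
    -Dc_args='-Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror' \
    -Dcpu_instruction_set=native \
    -Ddisable_apps='dumpcap,graph,pdump' \
    -Ddisable_libs='acl,bbdev,gro,gso' \
    -Denable_drivers='bus,bus/pci,bus/vdev,mempool/ring' \
    -Denable_docs=false -Denable_kmods=false -Dtests=false \
    -Dmax_lcores=128
ninja -C build-tmp -j 10    # emits the [N/268] Compiling/Linking lines seen in this log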
00:04:29.854 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:04:29.854 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:04:29.854 [71/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:04:30.112 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:04:30.112 [73/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:04:30.112 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:04:30.112 [75/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:04:30.112 [76/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:04:30.112 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:04:30.112 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:04:30.369 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:04:30.369 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:04:30.369 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:04:30.369 [82/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:04:30.934 [83/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:04:30.934 [84/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:04:30.934 [85/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:04:30.934 [86/268] Linking static target lib/librte_ring.a 00:04:30.934 [87/268] Linking static target lib/librte_eal.a 00:04:30.934 [88/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:04:30.934 [89/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:04:30.934 [90/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:04:30.934 [91/268] Linking static target lib/librte_rcu.a 00:04:31.192 [92/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:04:31.192 [93/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:04:31.192 [94/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:04:31.192 [95/268] Linking static target lib/librte_mempool.a 00:04:31.450 [96/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:04:31.450 [97/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:04:31.708 [98/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:04:31.708 [99/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:04:31.708 [100/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:04:31.708 [101/268] Linking static target lib/librte_mbuf.a 00:04:31.708 [102/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:04:31.708 [103/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:04:31.708 [104/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:04:32.035 [105/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:04:32.035 [106/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:04:32.035 [107/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:04:32.035 [108/268] Linking static target lib/librte_meter.a 00:04:32.293 [109/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:04:32.293 [110/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:04:32.293 [111/268] Linking static target 
lib/librte_net.a 00:04:32.293 [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:04:32.293 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:04:32.550 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:04:32.550 [115/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:04:32.806 [116/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:04:32.806 [117/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:04:32.806 [118/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:04:33.064 [119/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:04:33.064 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:04:33.321 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:04:33.321 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:04:33.321 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:04:33.579 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:04:33.579 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:04:33.579 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:04:33.579 [127/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:04:33.579 [128/268] Linking static target lib/librte_pci.a 00:04:33.579 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:04:33.579 [130/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:04:33.836 [131/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:04:33.836 [132/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:04:33.836 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:04:34.093 [134/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:04:34.093 [135/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:04:34.093 [136/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:04:34.093 [137/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:04:34.093 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:04:34.093 [139/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:04:34.093 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:04:34.093 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:04:34.093 [142/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:04:34.093 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:04:34.093 [144/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:04:34.093 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:04:34.093 [146/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:04:34.351 [147/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:04:34.351 [148/268] Linking static target lib/librte_cmdline.a 00:04:34.610 [149/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:04:34.610 [150/268] Compiling C object 
lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:04:34.868 [151/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:04:34.868 [152/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:04:34.868 [153/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:04:34.868 [154/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:04:34.868 [155/268] Linking static target lib/librte_timer.a 00:04:34.868 [156/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:04:34.868 [157/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:04:35.128 [158/268] Linking static target lib/librte_hash.a 00:04:35.129 [159/268] Linking static target lib/librte_ethdev.a 00:04:35.388 [160/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:04:35.388 [161/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:04:35.388 [162/268] Linking static target lib/librte_compressdev.a 00:04:35.388 [163/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:04:35.646 [164/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:04:35.646 [165/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:04:35.646 [166/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:04:35.646 [167/268] Linking static target lib/librte_dmadev.a 00:04:35.646 [168/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:04:35.904 [169/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:04:36.162 [170/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:04:36.163 [171/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:04:36.163 [172/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:04:36.163 [173/268] Linking static target lib/librte_cryptodev.a 00:04:36.163 [174/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:04:36.420 [175/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:04:36.421 [176/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:04:36.421 [177/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:36.678 [178/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:04:36.678 [179/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:04:36.678 [180/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:04:36.939 [181/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:36.939 [182/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:04:36.939 [183/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:04:36.939 [184/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:04:37.198 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:04:37.198 [186/268] Linking static target lib/librte_power.a 00:04:37.456 [187/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:04:37.456 [188/268] Linking static target lib/librte_security.a 00:04:37.456 [189/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:04:37.456 [190/268] Compiling C object 
lib/librte_vhost.a.p/vhost_iotlb.c.o 00:04:37.456 [191/268] Linking static target lib/librte_reorder.a 00:04:37.456 [192/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:04:37.456 [193/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:04:38.021 [194/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:04:38.279 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:04:38.279 [196/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:04:38.537 [197/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:04:38.537 [198/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:04:38.537 [199/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:04:38.795 [200/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:04:38.795 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:04:39.054 [202/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:04:39.312 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:04:39.312 [204/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:04:39.312 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:04:39.312 [206/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:04:39.570 [207/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:04:39.570 [208/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:04:39.570 [209/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:04:39.570 [210/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:39.570 [211/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:04:39.570 [212/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:04:39.829 [213/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:04:39.829 [214/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:04:39.829 [215/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:04:39.829 [216/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:04:39.829 [217/268] Linking static target drivers/librte_bus_vdev.a 00:04:39.829 [218/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:04:39.829 [219/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:04:39.829 [220/268] Linking static target drivers/librte_bus_pci.a 00:04:40.087 [221/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:04:40.087 [222/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:04:40.087 [223/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:04:40.345 [224/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:40.345 [225/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:04:40.345 [226/268] Linking static target drivers/librte_mempool_ring.a 00:04:40.345 [227/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:04:40.604 [228/268] 
Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:04:41.220 [229/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:04:41.220 [230/268] Linking static target lib/librte_vhost.a 00:04:43.119 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:04:43.686 [232/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:04:43.944 [233/268] Linking target lib/librte_eal.so.24.1 00:04:43.944 [234/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:43.944 [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:04:44.202 [236/268] Linking target lib/librte_meter.so.24.1 00:04:44.202 [237/268] Linking target lib/librte_timer.so.24.1 00:04:44.202 [238/268] Linking target lib/librte_dmadev.so.24.1 00:04:44.202 [239/268] Linking target drivers/librte_bus_vdev.so.24.1 00:04:44.202 [240/268] Linking target lib/librte_pci.so.24.1 00:04:44.202 [241/268] Linking target lib/librte_ring.so.24.1 00:04:44.202 [242/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:04:44.202 [243/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:04:44.202 [244/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:04:44.202 [245/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:04:44.202 [246/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:04:44.459 [247/268] Linking target lib/librte_mempool.so.24.1 00:04:44.459 [248/268] Linking target lib/librte_rcu.so.24.1 00:04:44.459 [249/268] Linking target drivers/librte_bus_pci.so.24.1 00:04:44.459 [250/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:04:44.459 [251/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:04:44.717 [252/268] Linking target drivers/librte_mempool_ring.so.24.1 00:04:44.717 [253/268] Linking target lib/librte_mbuf.so.24.1 00:04:44.717 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:04:44.717 [255/268] Linking target lib/librte_cryptodev.so.24.1 00:04:44.717 [256/268] Linking target lib/librte_net.so.24.1 00:04:44.717 [257/268] Linking target lib/librte_reorder.so.24.1 00:04:44.717 [258/268] Linking target lib/librte_compressdev.so.24.1 00:04:44.976 [259/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:04:44.976 [260/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:04:44.976 [261/268] Linking target lib/librte_security.so.24.1 00:04:44.976 [262/268] Linking target lib/librte_ethdev.so.24.1 00:04:44.976 [263/268] Linking target lib/librte_hash.so.24.1 00:04:44.976 [264/268] Linking target lib/librte_cmdline.so.24.1 00:04:45.233 [265/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:04:45.234 [266/268] Linking target lib/librte_power.so.24.1 00:04:45.234 [267/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:04:45.234 [268/268] Linking target lib/librte_vhost.so.24.1 00:04:45.234 INFO: autodetecting backend as ninja 00:04:45.234 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:05:17.340 CC lib/ut_mock/mock.o 00:05:17.340 CC 
lib/log/log_flags.o 00:05:17.340 CC lib/log/log.o 00:05:17.340 CC lib/log/log_deprecated.o 00:05:17.340 CC lib/ut/ut.o 00:05:17.340 LIB libspdk_ut.a 00:05:17.340 LIB libspdk_ut_mock.a 00:05:17.340 SO libspdk_ut_mock.so.6.0 00:05:17.340 SO libspdk_ut.so.2.0 00:05:17.340 LIB libspdk_log.a 00:05:17.340 SYMLINK libspdk_ut_mock.so 00:05:17.340 SO libspdk_log.so.7.1 00:05:17.340 SYMLINK libspdk_ut.so 00:05:17.340 SYMLINK libspdk_log.so 00:05:17.340 CXX lib/trace_parser/trace.o 00:05:17.340 CC lib/util/base64.o 00:05:17.340 CC lib/dma/dma.o 00:05:17.340 CC lib/util/bit_array.o 00:05:17.340 CC lib/util/cpuset.o 00:05:17.340 CC lib/util/crc16.o 00:05:17.340 CC lib/util/crc32.o 00:05:17.340 CC lib/util/crc32c.o 00:05:17.340 CC lib/ioat/ioat.o 00:05:17.340 CC lib/vfio_user/host/vfio_user_pci.o 00:05:17.340 CC lib/util/crc32_ieee.o 00:05:17.340 CC lib/vfio_user/host/vfio_user.o 00:05:17.340 CC lib/util/crc64.o 00:05:17.340 CC lib/util/dif.o 00:05:17.340 LIB libspdk_dma.a 00:05:17.340 CC lib/util/fd.o 00:05:17.340 CC lib/util/fd_group.o 00:05:17.340 SO libspdk_dma.so.5.0 00:05:17.340 SYMLINK libspdk_dma.so 00:05:17.340 CC lib/util/file.o 00:05:17.340 CC lib/util/hexlify.o 00:05:17.340 LIB libspdk_ioat.a 00:05:17.340 CC lib/util/iov.o 00:05:17.340 CC lib/util/math.o 00:05:17.340 SO libspdk_ioat.so.7.0 00:05:17.340 LIB libspdk_vfio_user.a 00:05:17.340 CC lib/util/net.o 00:05:17.340 SYMLINK libspdk_ioat.so 00:05:17.340 CC lib/util/pipe.o 00:05:17.340 SO libspdk_vfio_user.so.5.0 00:05:17.340 CC lib/util/strerror_tls.o 00:05:17.340 SYMLINK libspdk_vfio_user.so 00:05:17.340 CC lib/util/string.o 00:05:17.340 CC lib/util/uuid.o 00:05:17.340 CC lib/util/xor.o 00:05:17.340 CC lib/util/zipf.o 00:05:17.340 CC lib/util/md5.o 00:05:17.340 LIB libspdk_util.a 00:05:17.340 SO libspdk_util.so.10.1 00:05:17.340 LIB libspdk_trace_parser.a 00:05:17.340 SO libspdk_trace_parser.so.6.0 00:05:17.340 SYMLINK libspdk_trace_parser.so 00:05:17.340 SYMLINK libspdk_util.so 00:05:17.340 CC lib/json/json_util.o 00:05:17.340 CC lib/json/json_parse.o 00:05:17.340 CC lib/json/json_write.o 00:05:17.340 CC lib/rdma_utils/rdma_utils.o 00:05:17.340 CC lib/conf/conf.o 00:05:17.340 CC lib/idxd/idxd.o 00:05:17.340 CC lib/idxd/idxd_user.o 00:05:17.340 CC lib/idxd/idxd_kernel.o 00:05:17.340 CC lib/vmd/vmd.o 00:05:17.340 CC lib/env_dpdk/env.o 00:05:17.340 CC lib/env_dpdk/memory.o 00:05:17.340 CC lib/vmd/led.o 00:05:17.340 LIB libspdk_conf.a 00:05:17.340 CC lib/env_dpdk/pci.o 00:05:17.340 SO libspdk_conf.so.6.0 00:05:17.340 LIB libspdk_rdma_utils.a 00:05:17.340 CC lib/env_dpdk/init.o 00:05:17.340 SO libspdk_rdma_utils.so.1.0 00:05:17.340 SYMLINK libspdk_conf.so 00:05:17.340 CC lib/env_dpdk/threads.o 00:05:17.340 CC lib/env_dpdk/pci_ioat.o 00:05:17.340 LIB libspdk_json.a 00:05:17.340 SYMLINK libspdk_rdma_utils.so 00:05:17.340 SO libspdk_json.so.6.0 00:05:17.340 SYMLINK libspdk_json.so 00:05:17.340 CC lib/env_dpdk/pci_virtio.o 00:05:17.340 CC lib/env_dpdk/pci_vmd.o 00:05:17.340 CC lib/env_dpdk/pci_idxd.o 00:05:17.340 LIB libspdk_vmd.a 00:05:17.340 CC lib/rdma_provider/common.o 00:05:17.340 CC lib/env_dpdk/pci_event.o 00:05:17.340 LIB libspdk_idxd.a 00:05:17.340 SO libspdk_vmd.so.6.0 00:05:17.340 CC lib/env_dpdk/sigbus_handler.o 00:05:17.340 CC lib/env_dpdk/pci_dpdk.o 00:05:17.340 SO libspdk_idxd.so.12.1 00:05:17.340 CC lib/rdma_provider/rdma_provider_verbs.o 00:05:17.340 SYMLINK libspdk_vmd.so 00:05:17.340 SYMLINK libspdk_idxd.so 00:05:17.340 CC lib/env_dpdk/pci_dpdk_2207.o 00:05:17.340 CC lib/env_dpdk/pci_dpdk_2211.o 00:05:17.340 CC 
lib/jsonrpc/jsonrpc_client_tcp.o 00:05:17.340 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:05:17.340 CC lib/jsonrpc/jsonrpc_server.o 00:05:17.340 CC lib/jsonrpc/jsonrpc_client.o 00:05:17.340 LIB libspdk_rdma_provider.a 00:05:17.340 SO libspdk_rdma_provider.so.7.0 00:05:17.340 SYMLINK libspdk_rdma_provider.so 00:05:17.340 LIB libspdk_jsonrpc.a 00:05:17.340 SO libspdk_jsonrpc.so.6.0 00:05:17.340 LIB libspdk_env_dpdk.a 00:05:17.340 SYMLINK libspdk_jsonrpc.so 00:05:17.340 SO libspdk_env_dpdk.so.15.1 00:05:17.340 SYMLINK libspdk_env_dpdk.so 00:05:17.340 CC lib/rpc/rpc.o 00:05:17.340 LIB libspdk_rpc.a 00:05:17.340 SO libspdk_rpc.so.6.0 00:05:17.340 SYMLINK libspdk_rpc.so 00:05:17.340 CC lib/trace/trace.o 00:05:17.340 CC lib/trace/trace_flags.o 00:05:17.340 CC lib/trace/trace_rpc.o 00:05:17.340 CC lib/notify/notify.o 00:05:17.340 CC lib/notify/notify_rpc.o 00:05:17.340 CC lib/keyring/keyring.o 00:05:17.340 CC lib/keyring/keyring_rpc.o 00:05:17.340 LIB libspdk_notify.a 00:05:17.340 SO libspdk_notify.so.6.0 00:05:17.340 LIB libspdk_trace.a 00:05:17.340 SYMLINK libspdk_notify.so 00:05:17.340 SO libspdk_trace.so.11.0 00:05:17.340 LIB libspdk_keyring.a 00:05:17.599 SO libspdk_keyring.so.2.0 00:05:17.599 SYMLINK libspdk_trace.so 00:05:17.599 SYMLINK libspdk_keyring.so 00:05:17.858 CC lib/thread/iobuf.o 00:05:17.858 CC lib/thread/thread.o 00:05:17.858 CC lib/sock/sock_rpc.o 00:05:17.858 CC lib/sock/sock.o 00:05:18.427 LIB libspdk_sock.a 00:05:18.427 SO libspdk_sock.so.10.0 00:05:18.427 SYMLINK libspdk_sock.so 00:05:18.686 CC lib/nvme/nvme_ctrlr.o 00:05:18.686 CC lib/nvme/nvme_ctrlr_cmd.o 00:05:18.686 CC lib/nvme/nvme_pcie_common.o 00:05:18.686 CC lib/nvme/nvme_fabric.o 00:05:18.686 CC lib/nvme/nvme_qpair.o 00:05:18.686 CC lib/nvme/nvme_ns_cmd.o 00:05:18.686 CC lib/nvme/nvme_ns.o 00:05:18.686 CC lib/nvme/nvme_pcie.o 00:05:18.686 CC lib/nvme/nvme.o 00:05:19.624 LIB libspdk_thread.a 00:05:19.624 SO libspdk_thread.so.11.0 00:05:19.624 CC lib/nvme/nvme_quirks.o 00:05:19.624 CC lib/nvme/nvme_transport.o 00:05:19.624 SYMLINK libspdk_thread.so 00:05:19.624 CC lib/nvme/nvme_discovery.o 00:05:19.883 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:05:19.883 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:05:19.883 CC lib/nvme/nvme_tcp.o 00:05:19.883 CC lib/nvme/nvme_opal.o 00:05:20.142 CC lib/nvme/nvme_io_msg.o 00:05:20.142 CC lib/nvme/nvme_poll_group.o 00:05:20.142 CC lib/nvme/nvme_zns.o 00:05:20.142 CC lib/nvme/nvme_stubs.o 00:05:20.400 CC lib/nvme/nvme_auth.o 00:05:20.658 CC lib/accel/accel.o 00:05:20.658 CC lib/blob/blobstore.o 00:05:20.658 CC lib/init/json_config.o 00:05:20.917 CC lib/accel/accel_rpc.o 00:05:20.917 CC lib/accel/accel_sw.o 00:05:20.917 CC lib/init/subsystem.o 00:05:20.917 CC lib/blob/request.o 00:05:21.175 CC lib/virtio/virtio.o 00:05:21.175 CC lib/init/subsystem_rpc.o 00:05:21.175 CC lib/blob/zeroes.o 00:05:21.175 CC lib/init/rpc.o 00:05:21.175 CC lib/fsdev/fsdev.o 00:05:21.175 CC lib/fsdev/fsdev_io.o 00:05:21.434 LIB libspdk_init.a 00:05:21.434 CC lib/blob/blob_bs_dev.o 00:05:21.434 CC lib/fsdev/fsdev_rpc.o 00:05:21.434 CC lib/virtio/virtio_vhost_user.o 00:05:21.434 SO libspdk_init.so.6.0 00:05:21.434 SYMLINK libspdk_init.so 00:05:21.434 CC lib/virtio/virtio_vfio_user.o 00:05:21.434 CC lib/virtio/virtio_pci.o 00:05:21.434 CC lib/nvme/nvme_cuse.o 00:05:21.434 CC lib/nvme/nvme_rdma.o 00:05:21.693 LIB libspdk_accel.a 00:05:21.693 SO libspdk_accel.so.16.0 00:05:21.693 CC lib/event/app.o 00:05:21.693 CC lib/event/reactor.o 00:05:21.693 CC lib/event/log_rpc.o 00:05:21.693 CC lib/event/app_rpc.o 00:05:21.693 LIB 
libspdk_virtio.a 00:05:21.953 SYMLINK libspdk_accel.so 00:05:21.953 CC lib/event/scheduler_static.o 00:05:21.953 SO libspdk_virtio.so.7.0 00:05:21.953 LIB libspdk_fsdev.a 00:05:21.953 SYMLINK libspdk_virtio.so 00:05:21.953 SO libspdk_fsdev.so.2.0 00:05:21.953 SYMLINK libspdk_fsdev.so 00:05:22.212 CC lib/bdev/bdev.o 00:05:22.212 CC lib/bdev/bdev_rpc.o 00:05:22.212 CC lib/bdev/bdev_zone.o 00:05:22.212 CC lib/bdev/part.o 00:05:22.212 CC lib/bdev/scsi_nvme.o 00:05:22.212 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:05:22.212 LIB libspdk_event.a 00:05:22.212 SO libspdk_event.so.14.0 00:05:22.472 SYMLINK libspdk_event.so 00:05:22.730 LIB libspdk_nvme.a 00:05:22.988 LIB libspdk_fuse_dispatcher.a 00:05:22.988 SO libspdk_fuse_dispatcher.so.1.0 00:05:22.988 SYMLINK libspdk_fuse_dispatcher.so 00:05:22.988 SO libspdk_nvme.so.15.0 00:05:23.247 SYMLINK libspdk_nvme.so 00:05:23.833 LIB libspdk_blob.a 00:05:23.833 SO libspdk_blob.so.12.0 00:05:24.091 SYMLINK libspdk_blob.so 00:05:24.351 CC lib/lvol/lvol.o 00:05:24.351 CC lib/blobfs/tree.o 00:05:24.351 CC lib/blobfs/blobfs.o 00:05:24.917 LIB libspdk_bdev.a 00:05:24.917 SO libspdk_bdev.so.17.0 00:05:25.175 SYMLINK libspdk_bdev.so 00:05:25.175 LIB libspdk_blobfs.a 00:05:25.175 SO libspdk_blobfs.so.11.0 00:05:25.175 SYMLINK libspdk_blobfs.so 00:05:25.433 CC lib/nbd/nbd.o 00:05:25.433 CC lib/nbd/nbd_rpc.o 00:05:25.433 LIB libspdk_lvol.a 00:05:25.433 CC lib/nvmf/ctrlr.o 00:05:25.433 CC lib/nvmf/ctrlr_bdev.o 00:05:25.433 CC lib/nvmf/subsystem.o 00:05:25.433 CC lib/scsi/dev.o 00:05:25.433 CC lib/nvmf/ctrlr_discovery.o 00:05:25.433 CC lib/ublk/ublk.o 00:05:25.433 CC lib/ftl/ftl_core.o 00:05:25.433 SO libspdk_lvol.so.11.0 00:05:25.433 SYMLINK libspdk_lvol.so 00:05:25.433 CC lib/ftl/ftl_init.o 00:05:25.690 CC lib/scsi/lun.o 00:05:25.690 CC lib/scsi/port.o 00:05:25.690 CC lib/nvmf/nvmf.o 00:05:25.690 LIB libspdk_nbd.a 00:05:25.690 CC lib/ftl/ftl_layout.o 00:05:25.690 SO libspdk_nbd.so.7.0 00:05:25.947 SYMLINK libspdk_nbd.so 00:05:25.947 CC lib/ftl/ftl_debug.o 00:05:25.947 CC lib/nvmf/nvmf_rpc.o 00:05:25.947 CC lib/nvmf/transport.o 00:05:25.947 CC lib/scsi/scsi.o 00:05:26.206 CC lib/ublk/ublk_rpc.o 00:05:26.206 CC lib/scsi/scsi_bdev.o 00:05:26.206 CC lib/ftl/ftl_io.o 00:05:26.206 CC lib/nvmf/tcp.o 00:05:26.206 CC lib/nvmf/stubs.o 00:05:26.206 LIB libspdk_ublk.a 00:05:26.206 SO libspdk_ublk.so.3.0 00:05:26.464 SYMLINK libspdk_ublk.so 00:05:26.465 CC lib/nvmf/mdns_server.o 00:05:26.465 CC lib/ftl/ftl_sb.o 00:05:26.723 CC lib/nvmf/rdma.o 00:05:26.723 CC lib/scsi/scsi_pr.o 00:05:26.723 CC lib/ftl/ftl_l2p.o 00:05:26.723 CC lib/nvmf/auth.o 00:05:26.723 CC lib/ftl/ftl_l2p_flat.o 00:05:26.723 CC lib/ftl/ftl_nv_cache.o 00:05:26.723 CC lib/ftl/ftl_band.o 00:05:26.981 CC lib/ftl/ftl_band_ops.o 00:05:26.981 CC lib/ftl/ftl_writer.o 00:05:26.981 CC lib/ftl/ftl_rq.o 00:05:26.981 CC lib/scsi/scsi_rpc.o 00:05:27.355 CC lib/ftl/ftl_reloc.o 00:05:27.355 CC lib/ftl/ftl_l2p_cache.o 00:05:27.355 CC lib/scsi/task.o 00:05:27.355 CC lib/ftl/ftl_p2l.o 00:05:27.355 CC lib/ftl/ftl_p2l_log.o 00:05:27.355 CC lib/ftl/mngt/ftl_mngt.o 00:05:27.617 LIB libspdk_scsi.a 00:05:27.617 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:05:27.617 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:05:27.617 SO libspdk_scsi.so.9.0 00:05:27.617 CC lib/ftl/mngt/ftl_mngt_startup.o 00:05:27.617 CC lib/ftl/mngt/ftl_mngt_md.o 00:05:27.617 SYMLINK libspdk_scsi.so 00:05:27.617 CC lib/ftl/mngt/ftl_mngt_misc.o 00:05:27.617 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:05:27.875 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:05:27.875 CC lib/ftl/mngt/ftl_mngt_band.o 
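Editor's note: the CC / LIB / SO / SYMLINK lines in this part of the log are SPDK's own make output: each object is compiled (CC), archived into a static libspdk_*.a (LIB), built as a versioned shared object (SO libspdk_*.so.X.Y) and then symlinked (SYMLINK). A minimal local reproduction of this stage might look like the sketch below; the configure flags are assumptions inferred from the debug DPDK build and the sock/uring and bdev/uring objects in this log, not the job's verbatim command line.

# Sketch, not the CI's exact commands: configure flags are assumed.
cd /home/vagrant/spdk_repo/spdk
./configure --enable-debug --with-uring   # debug build with io_uring support (assumed)
make -j"$(nproc)"                         # prints the CC/LIB/SO/SYMLINK lines per library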
00:05:27.875 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:05:27.875 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:05:27.875 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:05:27.875 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:05:27.875 CC lib/ftl/utils/ftl_conf.o 00:05:28.133 CC lib/ftl/utils/ftl_md.o 00:05:28.133 CC lib/ftl/utils/ftl_mempool.o 00:05:28.133 CC lib/ftl/utils/ftl_bitmap.o 00:05:28.133 CC lib/ftl/utils/ftl_property.o 00:05:28.133 CC lib/iscsi/conn.o 00:05:28.133 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:05:28.133 CC lib/iscsi/init_grp.o 00:05:28.133 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:05:28.391 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:05:28.391 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:05:28.391 CC lib/iscsi/iscsi.o 00:05:28.391 CC lib/iscsi/param.o 00:05:28.391 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:05:28.649 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:05:28.649 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:05:28.649 CC lib/iscsi/portal_grp.o 00:05:28.649 CC lib/iscsi/tgt_node.o 00:05:28.649 CC lib/vhost/vhost.o 00:05:28.649 CC lib/vhost/vhost_rpc.o 00:05:28.649 CC lib/vhost/vhost_scsi.o 00:05:28.907 CC lib/ftl/upgrade/ftl_sb_v3.o 00:05:28.907 CC lib/iscsi/iscsi_subsystem.o 00:05:28.907 CC lib/iscsi/iscsi_rpc.o 00:05:28.907 LIB libspdk_nvmf.a 00:05:28.907 CC lib/iscsi/task.o 00:05:28.907 CC lib/ftl/upgrade/ftl_sb_v5.o 00:05:29.165 SO libspdk_nvmf.so.20.0 00:05:29.165 CC lib/vhost/vhost_blk.o 00:05:29.165 CC lib/vhost/rte_vhost_user.o 00:05:29.165 SYMLINK libspdk_nvmf.so 00:05:29.165 CC lib/ftl/nvc/ftl_nvc_dev.o 00:05:29.165 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:05:29.423 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:05:29.423 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:05:29.423 CC lib/ftl/base/ftl_base_dev.o 00:05:29.423 CC lib/ftl/base/ftl_base_bdev.o 00:05:29.423 CC lib/ftl/ftl_trace.o 00:05:29.681 LIB libspdk_ftl.a 00:05:29.939 LIB libspdk_iscsi.a 00:05:29.939 SO libspdk_iscsi.so.8.0 00:05:30.197 SO libspdk_ftl.so.9.0 00:05:30.197 SYMLINK libspdk_iscsi.so 00:05:30.197 LIB libspdk_vhost.a 00:05:30.455 SO libspdk_vhost.so.8.0 00:05:30.455 SYMLINK libspdk_ftl.so 00:05:30.455 SYMLINK libspdk_vhost.so 00:05:30.713 CC module/env_dpdk/env_dpdk_rpc.o 00:05:30.971 CC module/keyring/file/keyring.o 00:05:30.971 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:05:30.971 CC module/keyring/linux/keyring.o 00:05:30.971 CC module/scheduler/gscheduler/gscheduler.o 00:05:30.971 CC module/scheduler/dynamic/scheduler_dynamic.o 00:05:30.971 CC module/sock/posix/posix.o 00:05:30.971 CC module/blob/bdev/blob_bdev.o 00:05:30.971 CC module/accel/error/accel_error.o 00:05:30.971 LIB libspdk_env_dpdk_rpc.a 00:05:30.971 CC module/fsdev/aio/fsdev_aio.o 00:05:30.971 SO libspdk_env_dpdk_rpc.so.6.0 00:05:31.229 CC module/keyring/linux/keyring_rpc.o 00:05:31.229 SYMLINK libspdk_env_dpdk_rpc.so 00:05:31.229 CC module/keyring/file/keyring_rpc.o 00:05:31.229 CC module/fsdev/aio/fsdev_aio_rpc.o 00:05:31.229 LIB libspdk_scheduler_dpdk_governor.a 00:05:31.229 LIB libspdk_scheduler_gscheduler.a 00:05:31.229 SO libspdk_scheduler_dpdk_governor.so.4.0 00:05:31.229 SO libspdk_scheduler_gscheduler.so.4.0 00:05:31.229 CC module/accel/error/accel_error_rpc.o 00:05:31.229 LIB libspdk_scheduler_dynamic.a 00:05:31.229 SYMLINK libspdk_scheduler_dpdk_governor.so 00:05:31.229 SO libspdk_scheduler_dynamic.so.4.0 00:05:31.229 SYMLINK libspdk_scheduler_gscheduler.so 00:05:31.229 LIB libspdk_keyring_file.a 00:05:31.229 LIB libspdk_keyring_linux.a 00:05:31.229 CC module/fsdev/aio/linux_aio_mgr.o 00:05:31.229 SO libspdk_keyring_linux.so.1.0 00:05:31.229 LIB 
libspdk_blob_bdev.a 00:05:31.229 SO libspdk_keyring_file.so.2.0 00:05:31.229 SYMLINK libspdk_scheduler_dynamic.so 00:05:31.488 SO libspdk_blob_bdev.so.12.0 00:05:31.488 LIB libspdk_accel_error.a 00:05:31.488 SYMLINK libspdk_keyring_linux.so 00:05:31.488 SO libspdk_accel_error.so.2.0 00:05:31.488 SYMLINK libspdk_keyring_file.so 00:05:31.488 SYMLINK libspdk_blob_bdev.so 00:05:31.488 SYMLINK libspdk_accel_error.so 00:05:31.488 CC module/accel/ioat/accel_ioat.o 00:05:31.488 CC module/accel/ioat/accel_ioat_rpc.o 00:05:31.488 CC module/accel/dsa/accel_dsa.o 00:05:31.488 CC module/accel/dsa/accel_dsa_rpc.o 00:05:31.746 CC module/accel/iaa/accel_iaa.o 00:05:31.746 CC module/sock/uring/uring.o 00:05:31.746 LIB libspdk_accel_ioat.a 00:05:31.746 LIB libspdk_fsdev_aio.a 00:05:31.746 CC module/accel/iaa/accel_iaa_rpc.o 00:05:31.746 LIB libspdk_sock_posix.a 00:05:31.746 SO libspdk_accel_ioat.so.6.0 00:05:31.746 CC module/bdev/delay/vbdev_delay.o 00:05:31.746 CC module/blobfs/bdev/blobfs_bdev.o 00:05:31.746 SO libspdk_fsdev_aio.so.1.0 00:05:31.746 SO libspdk_sock_posix.so.6.0 00:05:31.746 SYMLINK libspdk_fsdev_aio.so 00:05:31.746 LIB libspdk_accel_dsa.a 00:05:31.746 SYMLINK libspdk_accel_ioat.so 00:05:32.005 CC module/bdev/error/vbdev_error.o 00:05:32.005 CC module/bdev/error/vbdev_error_rpc.o 00:05:32.005 SYMLINK libspdk_sock_posix.so 00:05:32.005 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:05:32.005 SO libspdk_accel_dsa.so.5.0 00:05:32.005 LIB libspdk_accel_iaa.a 00:05:32.005 SO libspdk_accel_iaa.so.3.0 00:05:32.005 SYMLINK libspdk_accel_dsa.so 00:05:32.005 CC module/bdev/delay/vbdev_delay_rpc.o 00:05:32.005 SYMLINK libspdk_accel_iaa.so 00:05:32.005 CC module/bdev/gpt/gpt.o 00:05:32.005 CC module/bdev/lvol/vbdev_lvol.o 00:05:32.005 LIB libspdk_blobfs_bdev.a 00:05:32.263 SO libspdk_blobfs_bdev.so.6.0 00:05:32.263 LIB libspdk_bdev_error.a 00:05:32.263 CC module/bdev/malloc/bdev_malloc.o 00:05:32.263 SO libspdk_bdev_error.so.6.0 00:05:32.263 CC module/bdev/null/bdev_null.o 00:05:32.263 SYMLINK libspdk_blobfs_bdev.so 00:05:32.263 CC module/bdev/null/bdev_null_rpc.o 00:05:32.263 LIB libspdk_bdev_delay.a 00:05:32.263 CC module/bdev/nvme/bdev_nvme.o 00:05:32.263 SO libspdk_bdev_delay.so.6.0 00:05:32.263 SYMLINK libspdk_bdev_error.so 00:05:32.263 CC module/bdev/nvme/bdev_nvme_rpc.o 00:05:32.263 CC module/bdev/gpt/vbdev_gpt.o 00:05:32.263 SYMLINK libspdk_bdev_delay.so 00:05:32.263 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:05:32.263 LIB libspdk_sock_uring.a 00:05:32.263 CC module/bdev/passthru/vbdev_passthru.o 00:05:32.521 SO libspdk_sock_uring.so.5.0 00:05:32.521 CC module/bdev/nvme/nvme_rpc.o 00:05:32.521 SYMLINK libspdk_sock_uring.so 00:05:32.521 CC module/bdev/malloc/bdev_malloc_rpc.o 00:05:32.521 LIB libspdk_bdev_null.a 00:05:32.521 SO libspdk_bdev_null.so.6.0 00:05:32.521 LIB libspdk_bdev_gpt.a 00:05:32.521 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:05:32.779 SO libspdk_bdev_gpt.so.6.0 00:05:32.779 SYMLINK libspdk_bdev_null.so 00:05:32.779 LIB libspdk_bdev_malloc.a 00:05:32.779 CC module/bdev/nvme/bdev_mdns_client.o 00:05:32.779 SYMLINK libspdk_bdev_gpt.so 00:05:32.779 SO libspdk_bdev_malloc.so.6.0 00:05:32.779 LIB libspdk_bdev_lvol.a 00:05:32.779 CC module/bdev/raid/bdev_raid.o 00:05:32.779 SO libspdk_bdev_lvol.so.6.0 00:05:32.779 SYMLINK libspdk_bdev_malloc.so 00:05:32.779 CC module/bdev/raid/bdev_raid_rpc.o 00:05:32.779 LIB libspdk_bdev_passthru.a 00:05:32.779 CC module/bdev/split/vbdev_split.o 00:05:32.779 SO libspdk_bdev_passthru.so.6.0 00:05:32.779 SYMLINK libspdk_bdev_lvol.so 00:05:33.037 CC 
module/bdev/split/vbdev_split_rpc.o 00:05:33.037 CC module/bdev/zone_block/vbdev_zone_block.o 00:05:33.037 CC module/bdev/uring/bdev_uring.o 00:05:33.037 SYMLINK libspdk_bdev_passthru.so 00:05:33.037 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:05:33.037 CC module/bdev/nvme/vbdev_opal.o 00:05:33.037 CC module/bdev/nvme/vbdev_opal_rpc.o 00:05:33.037 CC module/bdev/aio/bdev_aio.o 00:05:33.037 LIB libspdk_bdev_split.a 00:05:33.037 CC module/bdev/aio/bdev_aio_rpc.o 00:05:33.295 SO libspdk_bdev_split.so.6.0 00:05:33.295 SYMLINK libspdk_bdev_split.so 00:05:33.295 LIB libspdk_bdev_zone_block.a 00:05:33.295 CC module/bdev/uring/bdev_uring_rpc.o 00:05:33.295 SO libspdk_bdev_zone_block.so.6.0 00:05:33.295 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:05:33.295 CC module/bdev/ftl/bdev_ftl.o 00:05:33.295 SYMLINK libspdk_bdev_zone_block.so 00:05:33.295 CC module/bdev/ftl/bdev_ftl_rpc.o 00:05:33.295 CC module/bdev/raid/bdev_raid_sb.o 00:05:33.552 LIB libspdk_bdev_aio.a 00:05:33.552 CC module/bdev/iscsi/bdev_iscsi.o 00:05:33.552 SO libspdk_bdev_aio.so.6.0 00:05:33.552 LIB libspdk_bdev_uring.a 00:05:33.552 SO libspdk_bdev_uring.so.6.0 00:05:33.552 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:05:33.552 SYMLINK libspdk_bdev_aio.so 00:05:33.552 CC module/bdev/raid/raid0.o 00:05:33.552 CC module/bdev/raid/raid1.o 00:05:33.552 CC module/bdev/raid/concat.o 00:05:33.810 SYMLINK libspdk_bdev_uring.so 00:05:33.810 CC module/bdev/virtio/bdev_virtio_scsi.o 00:05:33.810 CC module/bdev/virtio/bdev_virtio_blk.o 00:05:33.810 LIB libspdk_bdev_ftl.a 00:05:33.810 SO libspdk_bdev_ftl.so.6.0 00:05:33.810 SYMLINK libspdk_bdev_ftl.so 00:05:33.810 CC module/bdev/virtio/bdev_virtio_rpc.o 00:05:33.810 LIB libspdk_bdev_iscsi.a 00:05:34.068 LIB libspdk_bdev_raid.a 00:05:34.068 SO libspdk_bdev_iscsi.so.6.0 00:05:34.068 SO libspdk_bdev_raid.so.6.0 00:05:34.068 SYMLINK libspdk_bdev_iscsi.so 00:05:34.068 SYMLINK libspdk_bdev_raid.so 00:05:34.330 LIB libspdk_bdev_virtio.a 00:05:34.331 SO libspdk_bdev_virtio.so.6.0 00:05:34.331 SYMLINK libspdk_bdev_virtio.so 00:05:35.265 LIB libspdk_bdev_nvme.a 00:05:35.265 SO libspdk_bdev_nvme.so.7.1 00:05:35.265 SYMLINK libspdk_bdev_nvme.so 00:05:35.831 CC module/event/subsystems/vmd/vmd.o 00:05:35.831 CC module/event/subsystems/vmd/vmd_rpc.o 00:05:35.831 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:05:35.831 CC module/event/subsystems/iobuf/iobuf.o 00:05:35.831 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:05:35.831 CC module/event/subsystems/scheduler/scheduler.o 00:05:35.831 CC module/event/subsystems/fsdev/fsdev.o 00:05:35.831 CC module/event/subsystems/sock/sock.o 00:05:35.831 CC module/event/subsystems/keyring/keyring.o 00:05:36.122 LIB libspdk_event_vmd.a 00:05:36.122 LIB libspdk_event_fsdev.a 00:05:36.122 LIB libspdk_event_iobuf.a 00:05:36.122 LIB libspdk_event_vhost_blk.a 00:05:36.122 LIB libspdk_event_scheduler.a 00:05:36.122 SO libspdk_event_fsdev.so.1.0 00:05:36.122 SO libspdk_event_vmd.so.6.0 00:05:36.122 SO libspdk_event_vhost_blk.so.3.0 00:05:36.122 LIB libspdk_event_keyring.a 00:05:36.122 SO libspdk_event_iobuf.so.3.0 00:05:36.122 SO libspdk_event_scheduler.so.4.0 00:05:36.122 SO libspdk_event_keyring.so.1.0 00:05:36.122 SYMLINK libspdk_event_fsdev.so 00:05:36.122 SYMLINK libspdk_event_vmd.so 00:05:36.122 SYMLINK libspdk_event_vhost_blk.so 00:05:36.122 SYMLINK libspdk_event_iobuf.so 00:05:36.122 SYMLINK libspdk_event_scheduler.so 00:05:36.122 LIB libspdk_event_sock.a 00:05:36.122 SYMLINK libspdk_event_keyring.so 00:05:36.122 SO libspdk_event_sock.so.5.0 00:05:36.122 SYMLINK 
libspdk_event_sock.so 00:05:36.380 CC module/event/subsystems/accel/accel.o 00:05:36.638 LIB libspdk_event_accel.a 00:05:36.638 SO libspdk_event_accel.so.6.0 00:05:36.638 SYMLINK libspdk_event_accel.so 00:05:37.224 CC module/event/subsystems/bdev/bdev.o 00:05:37.224 LIB libspdk_event_bdev.a 00:05:37.482 SO libspdk_event_bdev.so.6.0 00:05:37.482 SYMLINK libspdk_event_bdev.so 00:05:37.740 CC module/event/subsystems/nbd/nbd.o 00:05:37.740 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:05:37.740 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:05:37.740 CC module/event/subsystems/ublk/ublk.o 00:05:37.740 CC module/event/subsystems/scsi/scsi.o 00:05:37.999 LIB libspdk_event_ublk.a 00:05:37.999 LIB libspdk_event_nbd.a 00:05:37.999 LIB libspdk_event_scsi.a 00:05:37.999 SO libspdk_event_ublk.so.3.0 00:05:37.999 SO libspdk_event_nbd.so.6.0 00:05:37.999 SO libspdk_event_scsi.so.6.0 00:05:37.999 LIB libspdk_event_nvmf.a 00:05:37.999 SYMLINK libspdk_event_ublk.so 00:05:37.999 SYMLINK libspdk_event_nbd.so 00:05:37.999 SO libspdk_event_nvmf.so.6.0 00:05:37.999 SYMLINK libspdk_event_scsi.so 00:05:38.258 SYMLINK libspdk_event_nvmf.so 00:05:38.258 CC module/event/subsystems/iscsi/iscsi.o 00:05:38.258 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:05:38.516 LIB libspdk_event_vhost_scsi.a 00:05:38.516 LIB libspdk_event_iscsi.a 00:05:38.516 SO libspdk_event_vhost_scsi.so.3.0 00:05:38.775 SO libspdk_event_iscsi.so.6.0 00:05:38.775 SYMLINK libspdk_event_vhost_scsi.so 00:05:38.775 SYMLINK libspdk_event_iscsi.so 00:05:39.034 SO libspdk.so.6.0 00:05:39.034 SYMLINK libspdk.so 00:05:39.293 CC test/rpc_client/rpc_client_test.o 00:05:39.293 TEST_HEADER include/spdk/accel.h 00:05:39.293 TEST_HEADER include/spdk/accel_module.h 00:05:39.293 TEST_HEADER include/spdk/assert.h 00:05:39.293 TEST_HEADER include/spdk/barrier.h 00:05:39.293 TEST_HEADER include/spdk/base64.h 00:05:39.293 TEST_HEADER include/spdk/bdev.h 00:05:39.293 TEST_HEADER include/spdk/bdev_module.h 00:05:39.293 TEST_HEADER include/spdk/bdev_zone.h 00:05:39.293 TEST_HEADER include/spdk/bit_array.h 00:05:39.293 TEST_HEADER include/spdk/bit_pool.h 00:05:39.293 TEST_HEADER include/spdk/blob_bdev.h 00:05:39.293 TEST_HEADER include/spdk/blobfs_bdev.h 00:05:39.293 CXX app/trace/trace.o 00:05:39.293 TEST_HEADER include/spdk/blobfs.h 00:05:39.293 TEST_HEADER include/spdk/blob.h 00:05:39.293 TEST_HEADER include/spdk/conf.h 00:05:39.293 TEST_HEADER include/spdk/config.h 00:05:39.293 TEST_HEADER include/spdk/cpuset.h 00:05:39.293 TEST_HEADER include/spdk/crc16.h 00:05:39.293 TEST_HEADER include/spdk/crc32.h 00:05:39.293 TEST_HEADER include/spdk/crc64.h 00:05:39.293 TEST_HEADER include/spdk/dif.h 00:05:39.293 TEST_HEADER include/spdk/dma.h 00:05:39.293 TEST_HEADER include/spdk/endian.h 00:05:39.293 TEST_HEADER include/spdk/env_dpdk.h 00:05:39.293 CC examples/interrupt_tgt/interrupt_tgt.o 00:05:39.293 TEST_HEADER include/spdk/env.h 00:05:39.293 TEST_HEADER include/spdk/event.h 00:05:39.293 TEST_HEADER include/spdk/fd_group.h 00:05:39.293 TEST_HEADER include/spdk/fd.h 00:05:39.293 TEST_HEADER include/spdk/file.h 00:05:39.293 TEST_HEADER include/spdk/fsdev.h 00:05:39.293 TEST_HEADER include/spdk/fsdev_module.h 00:05:39.293 TEST_HEADER include/spdk/ftl.h 00:05:39.293 TEST_HEADER include/spdk/fuse_dispatcher.h 00:05:39.293 TEST_HEADER include/spdk/gpt_spec.h 00:05:39.293 TEST_HEADER include/spdk/hexlify.h 00:05:39.293 TEST_HEADER include/spdk/histogram_data.h 00:05:39.293 TEST_HEADER include/spdk/idxd.h 00:05:39.293 TEST_HEADER include/spdk/idxd_spec.h 00:05:39.293 
TEST_HEADER include/spdk/init.h 00:05:39.293 TEST_HEADER include/spdk/ioat.h 00:05:39.293 TEST_HEADER include/spdk/ioat_spec.h 00:05:39.293 CC test/thread/poller_perf/poller_perf.o 00:05:39.293 TEST_HEADER include/spdk/iscsi_spec.h 00:05:39.293 TEST_HEADER include/spdk/json.h 00:05:39.293 TEST_HEADER include/spdk/jsonrpc.h 00:05:39.293 TEST_HEADER include/spdk/keyring.h 00:05:39.293 CC examples/ioat/perf/perf.o 00:05:39.293 TEST_HEADER include/spdk/keyring_module.h 00:05:39.293 TEST_HEADER include/spdk/likely.h 00:05:39.293 TEST_HEADER include/spdk/log.h 00:05:39.293 CC examples/util/zipf/zipf.o 00:05:39.293 TEST_HEADER include/spdk/lvol.h 00:05:39.293 TEST_HEADER include/spdk/md5.h 00:05:39.293 TEST_HEADER include/spdk/memory.h 00:05:39.293 TEST_HEADER include/spdk/mmio.h 00:05:39.293 TEST_HEADER include/spdk/nbd.h 00:05:39.293 TEST_HEADER include/spdk/net.h 00:05:39.293 TEST_HEADER include/spdk/notify.h 00:05:39.293 TEST_HEADER include/spdk/nvme.h 00:05:39.293 TEST_HEADER include/spdk/nvme_intel.h 00:05:39.293 TEST_HEADER include/spdk/nvme_ocssd.h 00:05:39.293 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:05:39.293 TEST_HEADER include/spdk/nvme_spec.h 00:05:39.293 TEST_HEADER include/spdk/nvme_zns.h 00:05:39.293 TEST_HEADER include/spdk/nvmf_cmd.h 00:05:39.293 CC test/dma/test_dma/test_dma.o 00:05:39.293 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:05:39.293 TEST_HEADER include/spdk/nvmf.h 00:05:39.293 TEST_HEADER include/spdk/nvmf_spec.h 00:05:39.293 TEST_HEADER include/spdk/nvmf_transport.h 00:05:39.293 TEST_HEADER include/spdk/opal.h 00:05:39.293 TEST_HEADER include/spdk/opal_spec.h 00:05:39.552 TEST_HEADER include/spdk/pci_ids.h 00:05:39.552 CC test/app/bdev_svc/bdev_svc.o 00:05:39.552 TEST_HEADER include/spdk/pipe.h 00:05:39.552 TEST_HEADER include/spdk/queue.h 00:05:39.552 TEST_HEADER include/spdk/reduce.h 00:05:39.552 TEST_HEADER include/spdk/rpc.h 00:05:39.552 TEST_HEADER include/spdk/scheduler.h 00:05:39.552 TEST_HEADER include/spdk/scsi.h 00:05:39.552 TEST_HEADER include/spdk/scsi_spec.h 00:05:39.552 TEST_HEADER include/spdk/sock.h 00:05:39.552 TEST_HEADER include/spdk/stdinc.h 00:05:39.552 LINK rpc_client_test 00:05:39.552 TEST_HEADER include/spdk/string.h 00:05:39.552 TEST_HEADER include/spdk/thread.h 00:05:39.552 TEST_HEADER include/spdk/trace.h 00:05:39.552 TEST_HEADER include/spdk/trace_parser.h 00:05:39.552 TEST_HEADER include/spdk/tree.h 00:05:39.552 TEST_HEADER include/spdk/ublk.h 00:05:39.552 TEST_HEADER include/spdk/util.h 00:05:39.552 CC test/env/mem_callbacks/mem_callbacks.o 00:05:39.552 TEST_HEADER include/spdk/uuid.h 00:05:39.552 TEST_HEADER include/spdk/version.h 00:05:39.552 TEST_HEADER include/spdk/vfio_user_pci.h 00:05:39.552 TEST_HEADER include/spdk/vfio_user_spec.h 00:05:39.552 TEST_HEADER include/spdk/vhost.h 00:05:39.552 TEST_HEADER include/spdk/vmd.h 00:05:39.552 TEST_HEADER include/spdk/xor.h 00:05:39.552 TEST_HEADER include/spdk/zipf.h 00:05:39.552 CXX test/cpp_headers/accel.o 00:05:39.552 LINK interrupt_tgt 00:05:39.552 LINK zipf 00:05:39.552 LINK poller_perf 00:05:39.552 LINK bdev_svc 00:05:39.552 LINK ioat_perf 00:05:39.552 CXX test/cpp_headers/accel_module.o 00:05:39.811 LINK spdk_trace 00:05:39.811 CC app/trace_record/trace_record.o 00:05:39.811 CXX test/cpp_headers/assert.o 00:05:39.811 CC app/nvmf_tgt/nvmf_main.o 00:05:39.811 CXX test/cpp_headers/barrier.o 00:05:39.811 CC app/iscsi_tgt/iscsi_tgt.o 00:05:39.811 CC examples/ioat/verify/verify.o 00:05:40.070 CC app/spdk_tgt/spdk_tgt.o 00:05:40.070 LINK test_dma 00:05:40.070 CC 
test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:05:40.070 LINK mem_callbacks 00:05:40.070 LINK spdk_trace_record 00:05:40.070 CXX test/cpp_headers/base64.o 00:05:40.070 LINK nvmf_tgt 00:05:40.070 LINK iscsi_tgt 00:05:40.070 LINK verify 00:05:40.328 LINK spdk_tgt 00:05:40.328 CC test/event/event_perf/event_perf.o 00:05:40.328 CXX test/cpp_headers/bdev.o 00:05:40.328 CC test/env/vtophys/vtophys.o 00:05:40.328 LINK event_perf 00:05:40.328 CC test/event/reactor/reactor.o 00:05:40.586 LINK vtophys 00:05:40.586 LINK nvme_fuzz 00:05:40.586 CXX test/cpp_headers/bdev_module.o 00:05:40.586 CC test/accel/dif/dif.o 00:05:40.586 LINK reactor 00:05:40.586 CC app/spdk_lspci/spdk_lspci.o 00:05:40.586 CC examples/thread/thread/thread_ex.o 00:05:40.844 CC test/blobfs/mkfs/mkfs.o 00:05:40.844 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:05:40.844 CC test/lvol/esnap/esnap.o 00:05:40.844 CXX test/cpp_headers/bdev_zone.o 00:05:40.844 LINK spdk_lspci 00:05:40.844 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:05:40.844 CC test/event/reactor_perf/reactor_perf.o 00:05:40.844 LINK mkfs 00:05:41.103 LINK thread 00:05:41.103 CC test/nvme/aer/aer.o 00:05:41.103 CXX test/cpp_headers/bit_array.o 00:05:41.103 LINK env_dpdk_post_init 00:05:41.103 LINK reactor_perf 00:05:41.103 CC app/spdk_nvme_perf/perf.o 00:05:41.103 CXX test/cpp_headers/bit_pool.o 00:05:41.362 LINK dif 00:05:41.362 CC test/nvme/reset/reset.o 00:05:41.362 LINK aer 00:05:41.362 CC test/env/memory/memory_ut.o 00:05:41.362 CXX test/cpp_headers/blob_bdev.o 00:05:41.362 CC test/event/app_repeat/app_repeat.o 00:05:41.362 CC examples/sock/hello_world/hello_sock.o 00:05:41.621 LINK reset 00:05:41.621 LINK app_repeat 00:05:41.621 CXX test/cpp_headers/blobfs_bdev.o 00:05:41.621 CC examples/vmd/lsvmd/lsvmd.o 00:05:41.621 CC examples/idxd/perf/perf.o 00:05:41.879 LINK hello_sock 00:05:41.879 CXX test/cpp_headers/blobfs.o 00:05:41.879 LINK lsvmd 00:05:41.879 CC test/nvme/sgl/sgl.o 00:05:41.879 CC test/event/scheduler/scheduler.o 00:05:42.137 CXX test/cpp_headers/blob.o 00:05:42.137 LINK spdk_nvme_perf 00:05:42.137 CC test/nvme/e2edp/nvme_dp.o 00:05:42.137 LINK idxd_perf 00:05:42.137 LINK scheduler 00:05:42.137 CC examples/vmd/led/led.o 00:05:42.137 CXX test/cpp_headers/conf.o 00:05:42.137 LINK sgl 00:05:42.396 CC app/spdk_nvme_identify/identify.o 00:05:42.396 LINK nvme_dp 00:05:42.396 CC test/app/histogram_perf/histogram_perf.o 00:05:42.396 LINK iscsi_fuzz 00:05:42.396 LINK led 00:05:42.396 CXX test/cpp_headers/config.o 00:05:42.396 CXX test/cpp_headers/cpuset.o 00:05:42.726 CC test/app/jsoncat/jsoncat.o 00:05:42.726 LINK histogram_perf 00:05:42.726 CXX test/cpp_headers/crc16.o 00:05:42.726 LINK memory_ut 00:05:42.726 CC test/nvme/overhead/overhead.o 00:05:42.726 LINK jsoncat 00:05:42.726 CC test/nvme/err_injection/err_injection.o 00:05:42.726 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:05:42.726 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:05:42.726 CC examples/fsdev/hello_world/hello_fsdev.o 00:05:42.726 CXX test/cpp_headers/crc32.o 00:05:43.000 CXX test/cpp_headers/crc64.o 00:05:43.000 CXX test/cpp_headers/dif.o 00:05:43.000 LINK err_injection 00:05:43.000 CC test/env/pci/pci_ut.o 00:05:43.000 LINK overhead 00:05:43.000 LINK hello_fsdev 00:05:43.260 CXX test/cpp_headers/dma.o 00:05:43.260 CC examples/accel/perf/accel_perf.o 00:05:43.260 LINK vhost_fuzz 00:05:43.260 CC test/app/stub/stub.o 00:05:43.260 LINK spdk_nvme_identify 00:05:43.260 CC test/nvme/startup/startup.o 00:05:43.260 CXX test/cpp_headers/endian.o 00:05:43.519 CXX test/cpp_headers/env_dpdk.o 
00:05:43.519 LINK pci_ut 00:05:43.519 LINK stub 00:05:43.519 CC examples/blob/hello_world/hello_blob.o 00:05:43.519 LINK startup 00:05:43.519 CC examples/nvme/hello_world/hello_world.o 00:05:43.519 CC app/spdk_nvme_discover/discovery_aer.o 00:05:43.519 CXX test/cpp_headers/env.o 00:05:43.778 CXX test/cpp_headers/event.o 00:05:43.778 CC examples/nvme/reconnect/reconnect.o 00:05:43.778 LINK accel_perf 00:05:43.778 CXX test/cpp_headers/fd_group.o 00:05:43.778 LINK hello_blob 00:05:43.778 LINK spdk_nvme_discover 00:05:43.778 LINK hello_world 00:05:43.778 CC test/nvme/reserve/reserve.o 00:05:44.058 CC app/spdk_top/spdk_top.o 00:05:44.058 CXX test/cpp_headers/fd.o 00:05:44.058 CC app/vhost/vhost.o 00:05:44.058 LINK reconnect 00:05:44.058 LINK reserve 00:05:44.058 CC app/spdk_dd/spdk_dd.o 00:05:44.058 CXX test/cpp_headers/file.o 00:05:44.058 CC examples/blob/cli/blobcli.o 00:05:44.316 LINK vhost 00:05:44.316 CC app/fio/nvme/fio_plugin.o 00:05:44.316 CC examples/bdev/hello_world/hello_bdev.o 00:05:44.316 CXX test/cpp_headers/fsdev.o 00:05:44.316 CC examples/nvme/nvme_manage/nvme_manage.o 00:05:44.317 CC test/nvme/simple_copy/simple_copy.o 00:05:44.575 CXX test/cpp_headers/fsdev_module.o 00:05:44.575 LINK hello_bdev 00:05:44.575 LINK spdk_dd 00:05:44.575 LINK blobcli 00:05:44.575 CXX test/cpp_headers/ftl.o 00:05:44.575 CC examples/bdev/bdevperf/bdevperf.o 00:05:44.834 LINK simple_copy 00:05:44.834 CXX test/cpp_headers/fuse_dispatcher.o 00:05:44.834 LINK spdk_top 00:05:44.834 LINK spdk_nvme 00:05:44.834 LINK nvme_manage 00:05:44.834 CXX test/cpp_headers/gpt_spec.o 00:05:44.834 CXX test/cpp_headers/hexlify.o 00:05:45.093 CC test/nvme/connect_stress/connect_stress.o 00:05:45.093 CC test/nvme/boot_partition/boot_partition.o 00:05:45.093 CXX test/cpp_headers/histogram_data.o 00:05:45.093 CC app/fio/bdev/fio_plugin.o 00:05:45.093 CC test/bdev/bdevio/bdevio.o 00:05:45.093 CC test/nvme/compliance/nvme_compliance.o 00:05:45.093 CC examples/nvme/arbitration/arbitration.o 00:05:45.353 CC test/nvme/fused_ordering/fused_ordering.o 00:05:45.353 LINK boot_partition 00:05:45.353 LINK connect_stress 00:05:45.353 CXX test/cpp_headers/idxd.o 00:05:45.353 LINK fused_ordering 00:05:45.613 LINK bdevperf 00:05:45.613 CXX test/cpp_headers/idxd_spec.o 00:05:45.613 LINK arbitration 00:05:45.613 LINK bdevio 00:05:45.613 LINK nvme_compliance 00:05:45.613 CC examples/nvme/hotplug/hotplug.o 00:05:45.613 CC test/nvme/doorbell_aers/doorbell_aers.o 00:05:45.613 LINK spdk_bdev 00:05:45.613 CXX test/cpp_headers/init.o 00:05:45.872 CXX test/cpp_headers/ioat.o 00:05:45.872 CC test/nvme/fdp/fdp.o 00:05:45.872 CXX test/cpp_headers/ioat_spec.o 00:05:45.872 LINK doorbell_aers 00:05:45.872 CC test/nvme/cuse/cuse.o 00:05:45.872 CC examples/nvme/cmb_copy/cmb_copy.o 00:05:45.872 LINK hotplug 00:05:45.872 CC examples/nvme/abort/abort.o 00:05:45.872 CXX test/cpp_headers/iscsi_spec.o 00:05:45.872 CXX test/cpp_headers/json.o 00:05:45.872 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:05:45.872 CXX test/cpp_headers/jsonrpc.o 00:05:46.131 CXX test/cpp_headers/keyring.o 00:05:46.131 LINK cmb_copy 00:05:46.131 CXX test/cpp_headers/keyring_module.o 00:05:46.131 CXX test/cpp_headers/likely.o 00:05:46.131 LINK fdp 00:05:46.131 LINK pmr_persistence 00:05:46.131 CXX test/cpp_headers/log.o 00:05:46.131 CXX test/cpp_headers/lvol.o 00:05:46.131 LINK esnap 00:05:46.390 CXX test/cpp_headers/md5.o 00:05:46.390 CXX test/cpp_headers/memory.o 00:05:46.390 CXX test/cpp_headers/mmio.o 00:05:46.390 LINK abort 00:05:46.390 CXX test/cpp_headers/nbd.o 
00:05:46.390 CXX test/cpp_headers/net.o 00:05:46.390 CXX test/cpp_headers/notify.o 00:05:46.390 CXX test/cpp_headers/nvme.o 00:05:46.390 CXX test/cpp_headers/nvme_intel.o 00:05:46.390 CXX test/cpp_headers/nvme_ocssd.o 00:05:46.390 CXX test/cpp_headers/nvme_ocssd_spec.o 00:05:46.390 CXX test/cpp_headers/nvme_spec.o 00:05:46.648 CXX test/cpp_headers/nvme_zns.o 00:05:46.648 CXX test/cpp_headers/nvmf_cmd.o 00:05:46.648 CXX test/cpp_headers/nvmf_fc_spec.o 00:05:46.648 CXX test/cpp_headers/nvmf.o 00:05:46.648 CXX test/cpp_headers/nvmf_spec.o 00:05:46.648 CXX test/cpp_headers/nvmf_transport.o 00:05:46.648 CXX test/cpp_headers/opal.o 00:05:46.648 CC examples/nvmf/nvmf/nvmf.o 00:05:46.648 CXX test/cpp_headers/opal_spec.o 00:05:46.648 CXX test/cpp_headers/pci_ids.o 00:05:46.907 CXX test/cpp_headers/pipe.o 00:05:46.907 CXX test/cpp_headers/queue.o 00:05:46.907 CXX test/cpp_headers/reduce.o 00:05:46.907 CXX test/cpp_headers/rpc.o 00:05:46.907 CXX test/cpp_headers/scheduler.o 00:05:46.907 CXX test/cpp_headers/scsi.o 00:05:46.907 CXX test/cpp_headers/scsi_spec.o 00:05:46.907 CXX test/cpp_headers/sock.o 00:05:46.907 CXX test/cpp_headers/stdinc.o 00:05:46.907 CXX test/cpp_headers/string.o 00:05:46.907 CXX test/cpp_headers/thread.o 00:05:47.166 CXX test/cpp_headers/trace.o 00:05:47.166 CXX test/cpp_headers/trace_parser.o 00:05:47.166 LINK nvmf 00:05:47.166 CXX test/cpp_headers/tree.o 00:05:47.166 CXX test/cpp_headers/ublk.o 00:05:47.166 CXX test/cpp_headers/util.o 00:05:47.166 CXX test/cpp_headers/uuid.o 00:05:47.166 LINK cuse 00:05:47.166 CXX test/cpp_headers/version.o 00:05:47.166 CXX test/cpp_headers/vfio_user_pci.o 00:05:47.166 CXX test/cpp_headers/vfio_user_spec.o 00:05:47.166 CXX test/cpp_headers/vhost.o 00:05:47.166 CXX test/cpp_headers/vmd.o 00:05:47.166 CXX test/cpp_headers/xor.o 00:05:47.425 CXX test/cpp_headers/zipf.o 00:05:47.425 00:05:47.425 real 1m34.913s 00:05:47.425 user 8m13.333s 00:05:47.425 sys 2m9.213s 00:05:47.425 20:33:42 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:05:47.425 20:33:42 make -- common/autotest_common.sh@10 -- $ set +x 00:05:47.425 ************************************ 00:05:47.425 END TEST make 00:05:47.425 ************************************ 00:05:47.684 20:33:42 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:05:47.684 20:33:42 -- pm/common@29 -- $ signal_monitor_resources TERM 00:05:47.684 20:33:42 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:05:47.684 20:33:42 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:47.684 20:33:42 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:05:47.684 20:33:42 -- pm/common@44 -- $ pid=5300 00:05:47.684 20:33:42 -- pm/common@50 -- $ kill -TERM 5300 00:05:47.684 20:33:42 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:47.684 20:33:42 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:05:47.684 20:33:42 -- pm/common@44 -- $ pid=5301 00:05:47.684 20:33:42 -- pm/common@50 -- $ kill -TERM 5301 00:05:47.684 20:33:42 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:05:47.684 20:33:42 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:05:47.684 20:33:42 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:47.684 20:33:42 -- common/autotest_common.sh@1693 -- # lcov --version 00:05:47.684 20:33:42 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 
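Editor's note: right after "END TEST make", the pm/common trace above (signal_monitor_resources TERM, the collect-cpu-load.pid / collect-vmstat.pid checks, and kill -TERM 5300 / 5301) tears down the resource monitors that sampled CPU load and vmstat during the build. A simplified standalone sketch of that pidfile pattern follows; the helper body is a paraphrase of the trace, not SPDK's actual pm/common source.

# Simplified sketch of the pidfile-based monitor teardown traced above.
output_power=/home/vagrant/spdk_repo/spdk/../output/power

stop_monitor() {
    local pidfile=$1 pid
    [[ -e $pidfile ]] || return 0   # monitor never started, nothing to stop
    pid=$(<"$pidfile")              # assumed: the monitor's pid is read from its pidfile
    kill -TERM "$pid"               # matches the 'kill -TERM 5300' / '5301' lines above
}

stop_monitor "$output_power/collect-cpu-load.pid"
stop_monitor "$output_power/collect-vmstat.pid"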
00:05:47.684 20:33:42 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:47.684 20:33:42 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:47.684 20:33:42 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:47.684 20:33:42 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:47.684 20:33:42 -- scripts/common.sh@336 -- # IFS=.-: 00:05:47.684 20:33:42 -- scripts/common.sh@336 -- # read -ra ver1 00:05:47.684 20:33:42 -- scripts/common.sh@337 -- # IFS=.-: 00:05:47.684 20:33:42 -- scripts/common.sh@337 -- # read -ra ver2 00:05:47.684 20:33:42 -- scripts/common.sh@338 -- # local 'op=<' 00:05:47.684 20:33:42 -- scripts/common.sh@340 -- # ver1_l=2 00:05:47.684 20:33:42 -- scripts/common.sh@341 -- # ver2_l=1 00:05:47.684 20:33:42 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:47.684 20:33:42 -- scripts/common.sh@344 -- # case "$op" in 00:05:47.684 20:33:42 -- scripts/common.sh@345 -- # : 1 00:05:47.684 20:33:42 -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:47.684 20:33:42 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:47.684 20:33:42 -- scripts/common.sh@365 -- # decimal 1 00:05:47.684 20:33:42 -- scripts/common.sh@353 -- # local d=1 00:05:47.684 20:33:42 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:47.684 20:33:42 -- scripts/common.sh@355 -- # echo 1 00:05:47.684 20:33:42 -- scripts/common.sh@365 -- # ver1[v]=1 00:05:47.684 20:33:42 -- scripts/common.sh@366 -- # decimal 2 00:05:47.684 20:33:42 -- scripts/common.sh@353 -- # local d=2 00:05:47.684 20:33:42 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:47.684 20:33:42 -- scripts/common.sh@355 -- # echo 2 00:05:47.684 20:33:42 -- scripts/common.sh@366 -- # ver2[v]=2 00:05:47.684 20:33:42 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:47.684 20:33:42 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:47.684 20:33:42 -- scripts/common.sh@368 -- # return 0 00:05:47.684 20:33:42 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:47.684 20:33:42 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:47.684 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.684 --rc genhtml_branch_coverage=1 00:05:47.684 --rc genhtml_function_coverage=1 00:05:47.684 --rc genhtml_legend=1 00:05:47.684 --rc geninfo_all_blocks=1 00:05:47.684 --rc geninfo_unexecuted_blocks=1 00:05:47.684 00:05:47.684 ' 00:05:47.684 20:33:42 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:47.684 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.684 --rc genhtml_branch_coverage=1 00:05:47.684 --rc genhtml_function_coverage=1 00:05:47.684 --rc genhtml_legend=1 00:05:47.684 --rc geninfo_all_blocks=1 00:05:47.684 --rc geninfo_unexecuted_blocks=1 00:05:47.684 00:05:47.684 ' 00:05:47.684 20:33:42 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:47.684 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.684 --rc genhtml_branch_coverage=1 00:05:47.684 --rc genhtml_function_coverage=1 00:05:47.684 --rc genhtml_legend=1 00:05:47.684 --rc geninfo_all_blocks=1 00:05:47.684 --rc geninfo_unexecuted_blocks=1 00:05:47.684 00:05:47.684 ' 00:05:47.684 20:33:42 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:47.684 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.684 --rc genhtml_branch_coverage=1 00:05:47.684 --rc genhtml_function_coverage=1 00:05:47.684 --rc genhtml_legend=1 00:05:47.684 --rc geninfo_all_blocks=1 00:05:47.684 --rc 
geninfo_unexecuted_blocks=1 00:05:47.684 00:05:47.684 ' 00:05:47.684 20:33:42 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:47.684 20:33:42 -- nvmf/common.sh@7 -- # uname -s 00:05:47.684 20:33:42 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:47.684 20:33:42 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:47.684 20:33:42 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:47.684 20:33:42 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:47.684 20:33:42 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:47.684 20:33:42 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:47.684 20:33:42 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:47.684 20:33:42 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:47.684 20:33:42 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:47.684 20:33:42 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:47.684 20:33:42 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b 00:05:47.684 20:33:42 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b7a0101-ee75-44bd-b64f-b6a56d193f2b 00:05:47.684 20:33:42 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:47.684 20:33:42 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:47.684 20:33:42 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:05:47.684 20:33:42 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:47.684 20:33:42 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:47.684 20:33:42 -- scripts/common.sh@15 -- # shopt -s extglob 00:05:47.684 20:33:42 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:47.684 20:33:42 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:47.684 20:33:42 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:47.684 20:33:42 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:47.684 20:33:42 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:47.684 20:33:42 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:47.684 20:33:42 -- paths/export.sh@5 -- # export PATH 00:05:47.684 20:33:42 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:47.684 20:33:42 -- nvmf/common.sh@51 -- # : 0 00:05:47.684 20:33:42 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:47.684 20:33:42 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:47.684 20:33:42 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:47.684 20:33:42 -- nvmf/common.sh@29 -- 
# NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:47.684 20:33:42 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:47.684 20:33:42 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:47.684 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:47.684 20:33:42 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:47.684 20:33:42 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:47.684 20:33:42 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:47.684 20:33:42 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:05:47.684 20:33:42 -- spdk/autotest.sh@32 -- # uname -s 00:05:47.684 20:33:42 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:05:47.684 20:33:42 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:05:47.684 20:33:42 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:05:47.684 20:33:42 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:05:47.684 20:33:42 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:05:47.685 20:33:42 -- spdk/autotest.sh@44 -- # modprobe nbd 00:05:47.943 20:33:42 -- spdk/autotest.sh@46 -- # type -P udevadm 00:05:47.943 20:33:42 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:05:47.943 20:33:42 -- spdk/autotest.sh@48 -- # udevadm_pid=54471 00:05:47.943 20:33:42 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:05:47.943 20:33:42 -- pm/common@17 -- # local monitor 00:05:47.943 20:33:42 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:47.943 20:33:42 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:47.943 20:33:42 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:05:47.943 20:33:42 -- pm/common@25 -- # sleep 1 00:05:47.943 20:33:42 -- pm/common@21 -- # date +%s 00:05:47.943 20:33:42 -- pm/common@21 -- # date +%s 00:05:47.943 20:33:42 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732653222 00:05:47.943 20:33:42 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732653222 00:05:47.943 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732653222_collect-vmstat.pm.log 00:05:47.943 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732653222_collect-cpu-load.pm.log 00:05:48.879 20:33:43 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:05:48.879 20:33:43 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:05:48.879 20:33:43 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:48.879 20:33:43 -- common/autotest_common.sh@10 -- # set +x 00:05:48.879 20:33:43 -- spdk/autotest.sh@59 -- # create_test_list 00:05:48.879 20:33:43 -- common/autotest_common.sh@752 -- # xtrace_disable 00:05:48.879 20:33:43 -- common/autotest_common.sh@10 -- # set +x 00:05:48.879 20:33:43 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:05:48.879 20:33:43 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:05:48.879 20:33:43 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:05:48.879 20:33:43 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:05:48.879 20:33:43 -- spdk/autotest.sh@63 -- # cd 
/home/vagrant/spdk_repo/spdk 00:05:48.879 20:33:43 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:05:48.879 20:33:43 -- common/autotest_common.sh@1457 -- # uname 00:05:48.879 20:33:43 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:05:48.879 20:33:43 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:05:48.879 20:33:43 -- common/autotest_common.sh@1477 -- # uname 00:05:48.879 20:33:43 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:05:48.879 20:33:43 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:05:48.879 20:33:43 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:05:48.879 lcov: LCOV version 1.15 00:05:48.879 20:33:43 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:06:07.048 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:06:07.048 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:06:25.130 20:34:19 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:06:25.130 20:34:19 -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:25.130 20:34:19 -- common/autotest_common.sh@10 -- # set +x 00:06:25.130 20:34:19 -- spdk/autotest.sh@78 -- # rm -f 00:06:25.130 20:34:19 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:25.130 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:25.388 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:06:25.388 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:06:25.388 20:34:20 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:06:25.388 20:34:20 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:06:25.388 20:34:20 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:06:25.388 20:34:20 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:06:25.388 20:34:20 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:25.388 20:34:20 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:06:25.388 20:34:20 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:06:25.388 20:34:20 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:06:25.388 20:34:20 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:25.388 20:34:20 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:25.388 20:34:20 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:06:25.388 20:34:20 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:06:25.388 20:34:20 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:06:25.388 20:34:20 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:25.388 20:34:20 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:25.388 20:34:20 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n2 00:06:25.388 20:34:20 -- common/autotest_common.sh@1650 -- # local device=nvme1n2 00:06:25.388 20:34:20 -- 
common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:06:25.388 20:34:20 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:25.388 20:34:20 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:25.388 20:34:20 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n3 00:06:25.388 20:34:20 -- common/autotest_common.sh@1650 -- # local device=nvme1n3 00:06:25.388 20:34:20 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:06:25.388 20:34:20 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:25.388 20:34:20 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:06:25.388 20:34:20 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:06:25.388 20:34:20 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:06:25.388 20:34:20 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:06:25.388 20:34:20 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:06:25.388 20:34:20 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:06:25.388 No valid GPT data, bailing 00:06:25.388 20:34:20 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:06:25.388 20:34:20 -- scripts/common.sh@394 -- # pt= 00:06:25.388 20:34:20 -- scripts/common.sh@395 -- # return 1 00:06:25.388 20:34:20 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:06:25.388 1+0 records in 00:06:25.388 1+0 records out 00:06:25.388 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00555211 s, 189 MB/s 00:06:25.388 20:34:20 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:06:25.388 20:34:20 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:06:25.388 20:34:20 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:06:25.388 20:34:20 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:06:25.388 20:34:20 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:06:25.388 No valid GPT data, bailing 00:06:25.388 20:34:20 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:06:25.388 20:34:20 -- scripts/common.sh@394 -- # pt= 00:06:25.388 20:34:20 -- scripts/common.sh@395 -- # return 1 00:06:25.388 20:34:20 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:06:25.388 1+0 records in 00:06:25.388 1+0 records out 00:06:25.388 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00600909 s, 174 MB/s 00:06:25.388 20:34:20 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:06:25.388 20:34:20 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:06:25.388 20:34:20 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:06:25.388 20:34:20 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:06:25.388 20:34:20 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:06:25.646 No valid GPT data, bailing 00:06:25.646 20:34:20 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:06:25.646 20:34:20 -- scripts/common.sh@394 -- # pt= 00:06:25.646 20:34:20 -- scripts/common.sh@395 -- # return 1 00:06:25.646 20:34:20 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:06:25.646 1+0 records in 00:06:25.646 1+0 records out 00:06:25.646 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00537042 s, 195 MB/s 00:06:25.646 20:34:20 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:06:25.646 20:34:20 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:06:25.646 20:34:20 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:06:25.646 
20:34:20 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:06:25.646 20:34:20 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:06:25.646 No valid GPT data, bailing 00:06:25.646 20:34:20 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:06:25.646 20:34:20 -- scripts/common.sh@394 -- # pt= 00:06:25.646 20:34:20 -- scripts/common.sh@395 -- # return 1 00:06:25.646 20:34:20 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:06:25.646 1+0 records in 00:06:25.646 1+0 records out 00:06:25.646 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00639685 s, 164 MB/s 00:06:25.646 20:34:20 -- spdk/autotest.sh@105 -- # sync 00:06:25.646 20:34:20 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:06:25.646 20:34:20 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:06:25.646 20:34:20 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:06:28.179 20:34:22 -- spdk/autotest.sh@111 -- # uname -s 00:06:28.179 20:34:22 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:06:28.179 20:34:22 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:06:28.179 20:34:22 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:06:28.746 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:28.746 Hugepages 00:06:28.746 node hugesize free / total 00:06:28.746 node0 1048576kB 0 / 0 00:06:28.746 node0 2048kB 0 / 0 00:06:28.746 00:06:28.746 Type BDF Vendor Device NUMA Driver Device Block devices 00:06:29.004 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:06:29.004 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:06:29.004 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:06:29.004 20:34:23 -- spdk/autotest.sh@117 -- # uname -s 00:06:29.004 20:34:23 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:06:29.004 20:34:23 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:06:29.004 20:34:23 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:29.942 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:29.942 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:06:30.200 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:06:30.200 20:34:25 -- common/autotest_common.sh@1517 -- # sleep 1 00:06:31.134 20:34:26 -- common/autotest_common.sh@1518 -- # bdfs=() 00:06:31.134 20:34:26 -- common/autotest_common.sh@1518 -- # local bdfs 00:06:31.134 20:34:26 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:06:31.134 20:34:26 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:06:31.134 20:34:26 -- common/autotest_common.sh@1498 -- # bdfs=() 00:06:31.134 20:34:26 -- common/autotest_common.sh@1498 -- # local bdfs 00:06:31.134 20:34:26 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:31.134 20:34:26 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:31.134 20:34:26 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:06:31.134 20:34:26 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:06:31.134 20:34:26 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:06:31.134 20:34:26 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 
00:06:31.700 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:31.700 Waiting for block devices as requested 00:06:31.700 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:06:31.958 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:06:31.958 20:34:26 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:06:31.958 20:34:26 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:06:31.958 20:34:26 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:06:31.958 20:34:26 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:06:31.958 20:34:26 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:06:31.958 20:34:26 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:06:31.958 20:34:26 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:06:31.958 20:34:26 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:06:31.958 20:34:26 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:06:31.958 20:34:26 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:06:31.958 20:34:26 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:06:31.958 20:34:26 -- common/autotest_common.sh@1531 -- # grep oacs 00:06:31.958 20:34:26 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:06:31.958 20:34:26 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:06:31.958 20:34:26 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:06:31.958 20:34:26 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:06:31.958 20:34:26 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:06:31.958 20:34:26 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:06:31.958 20:34:26 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:06:31.958 20:34:26 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:06:31.958 20:34:26 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:06:31.958 20:34:26 -- common/autotest_common.sh@1543 -- # continue 00:06:31.958 20:34:26 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:06:31.958 20:34:26 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:06:31.958 20:34:26 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:06:31.958 20:34:26 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:06:31.958 20:34:26 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:06:31.958 20:34:26 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:06:32.235 20:34:26 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:06:32.235 20:34:26 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:06:32.235 20:34:26 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:06:32.235 20:34:26 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:06:32.235 20:34:26 -- common/autotest_common.sh@1531 -- # grep oacs 00:06:32.235 20:34:26 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:06:32.235 20:34:26 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:06:32.235 20:34:26 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:06:32.235 20:34:26 -- 
common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:06:32.235 20:34:26 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:06:32.235 20:34:26 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:06:32.235 20:34:26 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:06:32.235 20:34:26 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:06:32.235 20:34:26 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:06:32.235 20:34:26 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:06:32.235 20:34:26 -- common/autotest_common.sh@1543 -- # continue 00:06:32.235 20:34:26 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:06:32.235 20:34:26 -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:32.235 20:34:26 -- common/autotest_common.sh@10 -- # set +x 00:06:32.235 20:34:27 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:06:32.235 20:34:27 -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:32.235 20:34:27 -- common/autotest_common.sh@10 -- # set +x 00:06:32.235 20:34:27 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:32.800 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:33.057 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:06:33.057 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:06:33.057 20:34:28 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:06:33.316 20:34:28 -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:33.316 20:34:28 -- common/autotest_common.sh@10 -- # set +x 00:06:33.316 20:34:28 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:06:33.316 20:34:28 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:06:33.316 20:34:28 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:06:33.316 20:34:28 -- common/autotest_common.sh@1563 -- # bdfs=() 00:06:33.316 20:34:28 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:06:33.316 20:34:28 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:06:33.316 20:34:28 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:06:33.316 20:34:28 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:06:33.316 20:34:28 -- common/autotest_common.sh@1498 -- # bdfs=() 00:06:33.316 20:34:28 -- common/autotest_common.sh@1498 -- # local bdfs 00:06:33.316 20:34:28 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:33.316 20:34:28 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:06:33.316 20:34:28 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:33.316 20:34:28 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:06:33.316 20:34:28 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:06:33.316 20:34:28 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:06:33.316 20:34:28 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:06:33.316 20:34:28 -- common/autotest_common.sh@1566 -- # device=0x0010 00:06:33.316 20:34:28 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:06:33.316 20:34:28 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:06:33.316 20:34:28 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:06:33.316 20:34:28 -- common/autotest_common.sh@1566 -- # device=0x0010 00:06:33.316 20:34:28 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 
00:06:33.316 20:34:28 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:06:33.316 20:34:28 -- common/autotest_common.sh@1572 -- # return 0 00:06:33.316 20:34:28 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:06:33.316 20:34:28 -- common/autotest_common.sh@1580 -- # return 0 00:06:33.316 20:34:28 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:06:33.316 20:34:28 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:06:33.316 20:34:28 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:06:33.316 20:34:28 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:06:33.316 20:34:28 -- spdk/autotest.sh@149 -- # timing_enter lib 00:06:33.316 20:34:28 -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:33.316 20:34:28 -- common/autotest_common.sh@10 -- # set +x 00:06:33.316 20:34:28 -- spdk/autotest.sh@151 -- # [[ 1 -eq 1 ]] 00:06:33.316 20:34:28 -- spdk/autotest.sh@152 -- # export SPDK_SOCK_IMPL_DEFAULT=uring 00:06:33.316 20:34:28 -- spdk/autotest.sh@152 -- # SPDK_SOCK_IMPL_DEFAULT=uring 00:06:33.316 20:34:28 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:06:33.316 20:34:28 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:33.316 20:34:28 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:33.316 20:34:28 -- common/autotest_common.sh@10 -- # set +x 00:06:33.316 ************************************ 00:06:33.316 START TEST env 00:06:33.316 ************************************ 00:06:33.316 20:34:28 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:06:33.574 * Looking for test storage... 00:06:33.574 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:06:33.574 20:34:28 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:33.574 20:34:28 env -- common/autotest_common.sh@1693 -- # lcov --version 00:06:33.574 20:34:28 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:33.574 20:34:28 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:33.574 20:34:28 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:33.574 20:34:28 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:33.574 20:34:28 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:33.574 20:34:28 env -- scripts/common.sh@336 -- # IFS=.-: 00:06:33.574 20:34:28 env -- scripts/common.sh@336 -- # read -ra ver1 00:06:33.574 20:34:28 env -- scripts/common.sh@337 -- # IFS=.-: 00:06:33.574 20:34:28 env -- scripts/common.sh@337 -- # read -ra ver2 00:06:33.574 20:34:28 env -- scripts/common.sh@338 -- # local 'op=<' 00:06:33.574 20:34:28 env -- scripts/common.sh@340 -- # ver1_l=2 00:06:33.574 20:34:28 env -- scripts/common.sh@341 -- # ver2_l=1 00:06:33.574 20:34:28 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:33.574 20:34:28 env -- scripts/common.sh@344 -- # case "$op" in 00:06:33.574 20:34:28 env -- scripts/common.sh@345 -- # : 1 00:06:33.574 20:34:28 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:33.574 20:34:28 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:33.574 20:34:28 env -- scripts/common.sh@365 -- # decimal 1 00:06:33.574 20:34:28 env -- scripts/common.sh@353 -- # local d=1 00:06:33.574 20:34:28 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:33.574 20:34:28 env -- scripts/common.sh@355 -- # echo 1 00:06:33.574 20:34:28 env -- scripts/common.sh@365 -- # ver1[v]=1 00:06:33.574 20:34:28 env -- scripts/common.sh@366 -- # decimal 2 00:06:33.574 20:34:28 env -- scripts/common.sh@353 -- # local d=2 00:06:33.574 20:34:28 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:33.574 20:34:28 env -- scripts/common.sh@355 -- # echo 2 00:06:33.574 20:34:28 env -- scripts/common.sh@366 -- # ver2[v]=2 00:06:33.574 20:34:28 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:33.574 20:34:28 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:33.574 20:34:28 env -- scripts/common.sh@368 -- # return 0 00:06:33.574 20:34:28 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:33.574 20:34:28 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:33.574 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:33.574 --rc genhtml_branch_coverage=1 00:06:33.574 --rc genhtml_function_coverage=1 00:06:33.574 --rc genhtml_legend=1 00:06:33.574 --rc geninfo_all_blocks=1 00:06:33.574 --rc geninfo_unexecuted_blocks=1 00:06:33.574 00:06:33.574 ' 00:06:33.574 20:34:28 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:33.574 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:33.574 --rc genhtml_branch_coverage=1 00:06:33.574 --rc genhtml_function_coverage=1 00:06:33.574 --rc genhtml_legend=1 00:06:33.574 --rc geninfo_all_blocks=1 00:06:33.574 --rc geninfo_unexecuted_blocks=1 00:06:33.574 00:06:33.574 ' 00:06:33.574 20:34:28 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:33.574 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:33.574 --rc genhtml_branch_coverage=1 00:06:33.574 --rc genhtml_function_coverage=1 00:06:33.574 --rc genhtml_legend=1 00:06:33.574 --rc geninfo_all_blocks=1 00:06:33.574 --rc geninfo_unexecuted_blocks=1 00:06:33.574 00:06:33.574 ' 00:06:33.574 20:34:28 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:33.574 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:33.574 --rc genhtml_branch_coverage=1 00:06:33.574 --rc genhtml_function_coverage=1 00:06:33.574 --rc genhtml_legend=1 00:06:33.574 --rc geninfo_all_blocks=1 00:06:33.574 --rc geninfo_unexecuted_blocks=1 00:06:33.574 00:06:33.574 ' 00:06:33.574 20:34:28 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:06:33.574 20:34:28 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:33.574 20:34:28 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:33.574 20:34:28 env -- common/autotest_common.sh@10 -- # set +x 00:06:33.574 ************************************ 00:06:33.574 START TEST env_memory 00:06:33.574 ************************************ 00:06:33.575 20:34:28 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:06:33.575 00:06:33.575 00:06:33.575 CUnit - A unit testing framework for C - Version 2.1-3 00:06:33.575 http://cunit.sourceforge.net/ 00:06:33.575 00:06:33.575 00:06:33.575 Suite: memory 00:06:33.575 Test: alloc and free memory map ...[2024-11-26 20:34:28.470872] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:06:33.575 passed 00:06:33.575 Test: mem map translation ...[2024-11-26 20:34:28.505988] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:06:33.575 [2024-11-26 20:34:28.506302] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:06:33.575 [2024-11-26 20:34:28.506465] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:06:33.575 [2024-11-26 20:34:28.506578] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:06:33.575 passed 00:06:33.833 Test: mem map registration ...[2024-11-26 20:34:28.570801] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:06:33.833 [2024-11-26 20:34:28.571112] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:06:33.833 passed 00:06:33.833 Test: mem map adjacent registrations ...passed 00:06:33.833 00:06:33.833 Run Summary: Type Total Ran Passed Failed Inactive 00:06:33.833 suites 1 1 n/a 0 0 00:06:33.833 tests 4 4 4 0 0 00:06:33.833 asserts 152 152 152 0 n/a 00:06:33.833 00:06:33.833 Elapsed time = 0.219 seconds 00:06:33.833 00:06:33.833 real 0m0.241s 00:06:33.833 user 0m0.218s 00:06:33.833 sys 0m0.017s 00:06:33.833 20:34:28 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:33.833 20:34:28 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:06:33.833 ************************************ 00:06:33.833 END TEST env_memory 00:06:33.833 ************************************ 00:06:33.833 20:34:28 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:06:33.833 20:34:28 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:33.833 20:34:28 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:33.833 20:34:28 env -- common/autotest_common.sh@10 -- # set +x 00:06:33.833 ************************************ 00:06:33.833 START TEST env_vtophys 00:06:33.833 ************************************ 00:06:33.833 20:34:28 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:06:33.833 EAL: lib.eal log level changed from notice to debug 00:06:33.833 EAL: Detected lcore 0 as core 0 on socket 0 00:06:33.833 EAL: Detected lcore 1 as core 0 on socket 0 00:06:33.833 EAL: Detected lcore 2 as core 0 on socket 0 00:06:33.833 EAL: Detected lcore 3 as core 0 on socket 0 00:06:33.833 EAL: Detected lcore 4 as core 0 on socket 0 00:06:33.833 EAL: Detected lcore 5 as core 0 on socket 0 00:06:33.833 EAL: Detected lcore 6 as core 0 on socket 0 00:06:33.833 EAL: Detected lcore 7 as core 0 on socket 0 00:06:33.833 EAL: Detected lcore 8 as core 0 on socket 0 00:06:33.833 EAL: Detected lcore 9 as core 0 on socket 0 00:06:33.833 EAL: Maximum logical cores by configuration: 128 00:06:33.833 EAL: Detected CPU lcores: 10 00:06:33.833 EAL: Detected NUMA nodes: 1 00:06:33.833 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:06:33.833 EAL: Detected shared linkage of DPDK 00:06:33.833 EAL: No 
shared files mode enabled, IPC will be disabled 00:06:33.833 EAL: Selected IOVA mode 'PA' 00:06:33.833 EAL: Probing VFIO support... 00:06:33.833 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:06:33.833 EAL: VFIO modules not loaded, skipping VFIO support... 00:06:33.833 EAL: Ask a virtual area of 0x2e000 bytes 00:06:33.833 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:06:33.833 EAL: Setting up physically contiguous memory... 00:06:33.833 EAL: Setting maximum number of open files to 524288 00:06:33.833 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:06:33.833 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:06:33.833 EAL: Ask a virtual area of 0x61000 bytes 00:06:33.833 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:06:33.833 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:33.833 EAL: Ask a virtual area of 0x400000000 bytes 00:06:33.833 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:06:33.833 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:06:33.833 EAL: Ask a virtual area of 0x61000 bytes 00:06:33.833 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:06:33.833 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:33.833 EAL: Ask a virtual area of 0x400000000 bytes 00:06:33.833 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:06:33.833 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:06:33.833 EAL: Ask a virtual area of 0x61000 bytes 00:06:33.833 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:06:33.833 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:33.833 EAL: Ask a virtual area of 0x400000000 bytes 00:06:33.833 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:06:33.833 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:06:33.833 EAL: Ask a virtual area of 0x61000 bytes 00:06:33.833 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:06:33.833 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:33.833 EAL: Ask a virtual area of 0x400000000 bytes 00:06:33.833 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:06:33.833 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:06:33.833 EAL: Hugepages will be freed exactly as allocated. 00:06:33.833 EAL: No shared files mode enabled, IPC is disabled 00:06:33.833 EAL: No shared files mode enabled, IPC is disabled 00:06:34.091 EAL: TSC frequency is ~2100000 KHz 00:06:34.091 EAL: Main lcore 0 is ready (tid=7f143000fa00;cpuset=[0]) 00:06:34.091 EAL: Trying to obtain current memory policy. 00:06:34.091 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:34.091 EAL: Restoring previous memory policy: 0 00:06:34.091 EAL: request: mp_malloc_sync 00:06:34.091 EAL: No shared files mode enabled, IPC is disabled 00:06:34.091 EAL: Heap on socket 0 was expanded by 2MB 00:06:34.091 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:06:34.091 EAL: No PCI address specified using 'addr=' in: bus=pci 00:06:34.091 EAL: Mem event callback 'spdk:(nil)' registered 00:06:34.091 EAL: Module /sys/module/vfio_pci not found! 
error 2 (No such file or directory) 00:06:34.091 00:06:34.091 00:06:34.091 CUnit - A unit testing framework for C - Version 2.1-3 00:06:34.091 http://cunit.sourceforge.net/ 00:06:34.091 00:06:34.091 00:06:34.091 Suite: components_suite 00:06:34.091 Test: vtophys_malloc_test ...passed 00:06:34.091 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:06:34.091 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:34.091 EAL: Restoring previous memory policy: 4 00:06:34.091 EAL: Calling mem event callback 'spdk:(nil)' 00:06:34.091 EAL: request: mp_malloc_sync 00:06:34.091 EAL: No shared files mode enabled, IPC is disabled 00:06:34.091 EAL: Heap on socket 0 was expanded by 4MB 00:06:34.091 EAL: Calling mem event callback 'spdk:(nil)' 00:06:34.091 EAL: request: mp_malloc_sync 00:06:34.091 EAL: No shared files mode enabled, IPC is disabled 00:06:34.091 EAL: Heap on socket 0 was shrunk by 4MB 00:06:34.091 EAL: Trying to obtain current memory policy. 00:06:34.091 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:34.091 EAL: Restoring previous memory policy: 4 00:06:34.091 EAL: Calling mem event callback 'spdk:(nil)' 00:06:34.091 EAL: request: mp_malloc_sync 00:06:34.091 EAL: No shared files mode enabled, IPC is disabled 00:06:34.091 EAL: Heap on socket 0 was expanded by 6MB 00:06:34.091 EAL: Calling mem event callback 'spdk:(nil)' 00:06:34.091 EAL: request: mp_malloc_sync 00:06:34.091 EAL: No shared files mode enabled, IPC is disabled 00:06:34.091 EAL: Heap on socket 0 was shrunk by 6MB 00:06:34.091 EAL: Trying to obtain current memory policy. 00:06:34.091 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:34.091 EAL: Restoring previous memory policy: 4 00:06:34.091 EAL: Calling mem event callback 'spdk:(nil)' 00:06:34.091 EAL: request: mp_malloc_sync 00:06:34.091 EAL: No shared files mode enabled, IPC is disabled 00:06:34.091 EAL: Heap on socket 0 was expanded by 10MB 00:06:34.091 EAL: Calling mem event callback 'spdk:(nil)' 00:06:34.091 EAL: request: mp_malloc_sync 00:06:34.091 EAL: No shared files mode enabled, IPC is disabled 00:06:34.091 EAL: Heap on socket 0 was shrunk by 10MB 00:06:34.091 EAL: Trying to obtain current memory policy. 00:06:34.091 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:34.091 EAL: Restoring previous memory policy: 4 00:06:34.091 EAL: Calling mem event callback 'spdk:(nil)' 00:06:34.091 EAL: request: mp_malloc_sync 00:06:34.091 EAL: No shared files mode enabled, IPC is disabled 00:06:34.091 EAL: Heap on socket 0 was expanded by 18MB 00:06:34.091 EAL: Calling mem event callback 'spdk:(nil)' 00:06:34.092 EAL: request: mp_malloc_sync 00:06:34.092 EAL: No shared files mode enabled, IPC is disabled 00:06:34.092 EAL: Heap on socket 0 was shrunk by 18MB 00:06:34.092 EAL: Trying to obtain current memory policy. 00:06:34.092 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:34.092 EAL: Restoring previous memory policy: 4 00:06:34.092 EAL: Calling mem event callback 'spdk:(nil)' 00:06:34.092 EAL: request: mp_malloc_sync 00:06:34.092 EAL: No shared files mode enabled, IPC is disabled 00:06:34.092 EAL: Heap on socket 0 was expanded by 34MB 00:06:34.092 EAL: Calling mem event callback 'spdk:(nil)' 00:06:34.092 EAL: request: mp_malloc_sync 00:06:34.092 EAL: No shared files mode enabled, IPC is disabled 00:06:34.092 EAL: Heap on socket 0 was shrunk by 34MB 00:06:34.092 EAL: Trying to obtain current memory policy. 
00:06:34.092 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:34.092 EAL: Restoring previous memory policy: 4 00:06:34.092 EAL: Calling mem event callback 'spdk:(nil)' 00:06:34.092 EAL: request: mp_malloc_sync 00:06:34.092 EAL: No shared files mode enabled, IPC is disabled 00:06:34.092 EAL: Heap on socket 0 was expanded by 66MB 00:06:34.092 EAL: Calling mem event callback 'spdk:(nil)' 00:06:34.092 EAL: request: mp_malloc_sync 00:06:34.092 EAL: No shared files mode enabled, IPC is disabled 00:06:34.092 EAL: Heap on socket 0 was shrunk by 66MB 00:06:34.092 EAL: Trying to obtain current memory policy. 00:06:34.092 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:34.092 EAL: Restoring previous memory policy: 4 00:06:34.092 EAL: Calling mem event callback 'spdk:(nil)' 00:06:34.092 EAL: request: mp_malloc_sync 00:06:34.092 EAL: No shared files mode enabled, IPC is disabled 00:06:34.092 EAL: Heap on socket 0 was expanded by 130MB 00:06:34.092 EAL: Calling mem event callback 'spdk:(nil)' 00:06:34.092 EAL: request: mp_malloc_sync 00:06:34.092 EAL: No shared files mode enabled, IPC is disabled 00:06:34.092 EAL: Heap on socket 0 was shrunk by 130MB 00:06:34.092 EAL: Trying to obtain current memory policy. 00:06:34.092 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:34.350 EAL: Restoring previous memory policy: 4 00:06:34.350 EAL: Calling mem event callback 'spdk:(nil)' 00:06:34.350 EAL: request: mp_malloc_sync 00:06:34.350 EAL: No shared files mode enabled, IPC is disabled 00:06:34.350 EAL: Heap on socket 0 was expanded by 258MB 00:06:34.350 EAL: Calling mem event callback 'spdk:(nil)' 00:06:34.350 EAL: request: mp_malloc_sync 00:06:34.350 EAL: No shared files mode enabled, IPC is disabled 00:06:34.350 EAL: Heap on socket 0 was shrunk by 258MB 00:06:34.350 EAL: Trying to obtain current memory policy. 00:06:34.350 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:34.607 EAL: Restoring previous memory policy: 4 00:06:34.607 EAL: Calling mem event callback 'spdk:(nil)' 00:06:34.607 EAL: request: mp_malloc_sync 00:06:34.607 EAL: No shared files mode enabled, IPC is disabled 00:06:34.607 EAL: Heap on socket 0 was expanded by 514MB 00:06:34.607 EAL: Calling mem event callback 'spdk:(nil)' 00:06:34.607 EAL: request: mp_malloc_sync 00:06:34.607 EAL: No shared files mode enabled, IPC is disabled 00:06:34.607 EAL: Heap on socket 0 was shrunk by 514MB 00:06:34.607 EAL: Trying to obtain current memory policy. 
00:06:34.607 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:34.865 EAL: Restoring previous memory policy: 4 00:06:34.865 EAL: Calling mem event callback 'spdk:(nil)' 00:06:34.865 EAL: request: mp_malloc_sync 00:06:34.865 EAL: No shared files mode enabled, IPC is disabled 00:06:34.865 EAL: Heap on socket 0 was expanded by 1026MB 00:06:35.122 EAL: Calling mem event callback 'spdk:(nil)' 00:06:35.122 passed 00:06:35.122 00:06:35.122 Run Summary: Type Total Ran Passed Failed Inactive 00:06:35.122 suites 1 1 n/a 0 0 00:06:35.122 tests 2 2 2 0 0 00:06:35.122 asserts 5470 5470 5470 0 n/a 00:06:35.122 00:06:35.122 Elapsed time = 1.132 seconds 00:06:35.122 EAL: request: mp_malloc_sync 00:06:35.122 EAL: No shared files mode enabled, IPC is disabled 00:06:35.122 EAL: Heap on socket 0 was shrunk by 1026MB 00:06:35.122 EAL: Calling mem event callback 'spdk:(nil)' 00:06:35.122 EAL: request: mp_malloc_sync 00:06:35.122 EAL: No shared files mode enabled, IPC is disabled 00:06:35.122 EAL: Heap on socket 0 was shrunk by 2MB 00:06:35.122 EAL: No shared files mode enabled, IPC is disabled 00:06:35.122 EAL: No shared files mode enabled, IPC is disabled 00:06:35.122 EAL: No shared files mode enabled, IPC is disabled 00:06:35.122 00:06:35.122 real 0m1.345s 00:06:35.122 user 0m0.728s 00:06:35.122 sys 0m0.481s 00:06:35.122 20:34:30 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:35.122 20:34:30 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:06:35.122 ************************************ 00:06:35.122 END TEST env_vtophys 00:06:35.122 ************************************ 00:06:35.378 20:34:30 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:06:35.378 20:34:30 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:35.378 20:34:30 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:35.378 20:34:30 env -- common/autotest_common.sh@10 -- # set +x 00:06:35.378 ************************************ 00:06:35.378 START TEST env_pci 00:06:35.378 ************************************ 00:06:35.378 20:34:30 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:06:35.378 00:06:35.378 00:06:35.378 CUnit - A unit testing framework for C - Version 2.1-3 00:06:35.378 http://cunit.sourceforge.net/ 00:06:35.378 00:06:35.378 00:06:35.378 Suite: pci 00:06:35.378 Test: pci_hook ...[2024-11-26 20:34:30.146322] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 56768 has claimed it 00:06:35.378 passed 00:06:35.378 00:06:35.378 Run Summary: Type Total Ran Passed Failed Inactive 00:06:35.378 suites 1 1 n/a 0 0 00:06:35.378 tests 1 1 1 0 0 00:06:35.378 asserts 25 25 25 0 n/a 00:06:35.378 00:06:35.378 Elapsed time = 0.002 seconds 00:06:35.378 EAL: Cannot find device (10000:00:01.0) 00:06:35.378 EAL: Failed to attach device on primary process 00:06:35.378 00:06:35.378 real 0m0.025s 00:06:35.378 user 0m0.012s 00:06:35.378 sys 0m0.013s 00:06:35.378 20:34:30 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:35.378 20:34:30 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:06:35.378 ************************************ 00:06:35.378 END TEST env_pci 00:06:35.378 ************************************ 00:06:35.378 20:34:30 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:06:35.378 20:34:30 env -- env/env.sh@15 -- # uname 00:06:35.378 20:34:30 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:06:35.378 20:34:30 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:06:35.378 20:34:30 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:35.378 20:34:30 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:06:35.378 20:34:30 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:35.378 20:34:30 env -- common/autotest_common.sh@10 -- # set +x 00:06:35.378 ************************************ 00:06:35.378 START TEST env_dpdk_post_init 00:06:35.378 ************************************ 00:06:35.378 20:34:30 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:35.378 EAL: Detected CPU lcores: 10 00:06:35.378 EAL: Detected NUMA nodes: 1 00:06:35.378 EAL: Detected shared linkage of DPDK 00:06:35.378 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:35.378 EAL: Selected IOVA mode 'PA' 00:06:35.636 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:35.636 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:06:35.636 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:06:35.636 Starting DPDK initialization... 00:06:35.636 Starting SPDK post initialization... 00:06:35.636 SPDK NVMe probe 00:06:35.636 Attaching to 0000:00:10.0 00:06:35.636 Attaching to 0000:00:11.0 00:06:35.636 Attached to 0000:00:10.0 00:06:35.636 Attached to 0000:00:11.0 00:06:35.636 Cleaning up... 00:06:35.636 00:06:35.636 real 0m0.202s 00:06:35.636 user 0m0.058s 00:06:35.636 sys 0m0.044s 00:06:35.636 20:34:30 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:35.636 20:34:30 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:06:35.636 ************************************ 00:06:35.636 END TEST env_dpdk_post_init 00:06:35.636 ************************************ 00:06:35.636 20:34:30 env -- env/env.sh@26 -- # uname 00:06:35.636 20:34:30 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:06:35.636 20:34:30 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:06:35.636 20:34:30 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:35.636 20:34:30 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:35.636 20:34:30 env -- common/autotest_common.sh@10 -- # set +x 00:06:35.636 ************************************ 00:06:35.636 START TEST env_mem_callbacks 00:06:35.636 ************************************ 00:06:35.636 20:34:30 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:06:35.636 EAL: Detected CPU lcores: 10 00:06:35.636 EAL: Detected NUMA nodes: 1 00:06:35.636 EAL: Detected shared linkage of DPDK 00:06:35.636 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:35.636 EAL: Selected IOVA mode 'PA' 00:06:35.894 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:35.894 00:06:35.894 00:06:35.894 CUnit - A unit testing framework for C - Version 2.1-3 00:06:35.894 http://cunit.sourceforge.net/ 00:06:35.894 00:06:35.894 00:06:35.894 Suite: memory 00:06:35.894 Test: test ... 
00:06:35.895 register 0x200000200000 2097152 00:06:35.895 malloc 3145728 00:06:35.895 register 0x200000400000 4194304 00:06:35.895 buf 0x200000500000 len 3145728 PASSED 00:06:35.895 malloc 64 00:06:35.895 buf 0x2000004fff40 len 64 PASSED 00:06:35.895 malloc 4194304 00:06:35.895 register 0x200000800000 6291456 00:06:35.895 buf 0x200000a00000 len 4194304 PASSED 00:06:35.895 free 0x200000500000 3145728 00:06:35.895 free 0x2000004fff40 64 00:06:35.895 unregister 0x200000400000 4194304 PASSED 00:06:35.895 free 0x200000a00000 4194304 00:06:35.895 unregister 0x200000800000 6291456 PASSED 00:06:35.895 malloc 8388608 00:06:35.895 register 0x200000400000 10485760 00:06:35.895 buf 0x200000600000 len 8388608 PASSED 00:06:35.895 free 0x200000600000 8388608 00:06:35.895 unregister 0x200000400000 10485760 PASSED 00:06:35.895 passed 00:06:35.895 00:06:35.895 Run Summary: Type Total Ran Passed Failed Inactive 00:06:35.895 suites 1 1 n/a 0 0 00:06:35.895 tests 1 1 1 0 0 00:06:35.895 asserts 15 15 15 0 n/a 00:06:35.895 00:06:35.895 Elapsed time = 0.009 seconds 00:06:35.895 00:06:35.895 real 0m0.147s 00:06:35.895 user 0m0.015s 00:06:35.895 sys 0m0.032s 00:06:35.895 20:34:30 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:35.895 20:34:30 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:06:35.895 ************************************ 00:06:35.895 END TEST env_mem_callbacks 00:06:35.895 ************************************ 00:06:35.895 00:06:35.895 real 0m2.474s 00:06:35.895 user 0m1.241s 00:06:35.895 sys 0m0.899s 00:06:35.895 20:34:30 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:35.895 20:34:30 env -- common/autotest_common.sh@10 -- # set +x 00:06:35.895 ************************************ 00:06:35.895 END TEST env 00:06:35.895 ************************************ 00:06:35.895 20:34:30 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:06:35.895 20:34:30 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:35.895 20:34:30 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:35.895 20:34:30 -- common/autotest_common.sh@10 -- # set +x 00:06:35.895 ************************************ 00:06:35.895 START TEST rpc 00:06:35.895 ************************************ 00:06:35.895 20:34:30 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:06:35.895 * Looking for test storage... 
00:06:35.895 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:06:35.895 20:34:30 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:35.895 20:34:30 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:35.895 20:34:30 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:06:36.154 20:34:30 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:36.154 20:34:30 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:36.154 20:34:30 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:36.154 20:34:30 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:36.154 20:34:30 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:36.154 20:34:30 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:36.154 20:34:30 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:36.154 20:34:30 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:36.154 20:34:30 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:36.154 20:34:30 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:36.154 20:34:30 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:36.154 20:34:30 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:36.154 20:34:30 rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:36.154 20:34:30 rpc -- scripts/common.sh@345 -- # : 1 00:06:36.154 20:34:30 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:36.154 20:34:30 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:36.154 20:34:30 rpc -- scripts/common.sh@365 -- # decimal 1 00:06:36.154 20:34:30 rpc -- scripts/common.sh@353 -- # local d=1 00:06:36.154 20:34:30 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:36.154 20:34:30 rpc -- scripts/common.sh@355 -- # echo 1 00:06:36.154 20:34:30 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:36.154 20:34:30 rpc -- scripts/common.sh@366 -- # decimal 2 00:06:36.154 20:34:30 rpc -- scripts/common.sh@353 -- # local d=2 00:06:36.154 20:34:30 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:36.154 20:34:30 rpc -- scripts/common.sh@355 -- # echo 2 00:06:36.154 20:34:30 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:36.154 20:34:30 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:36.154 20:34:30 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:36.154 20:34:30 rpc -- scripts/common.sh@368 -- # return 0 00:06:36.154 20:34:30 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:36.154 20:34:30 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:36.154 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:36.154 --rc genhtml_branch_coverage=1 00:06:36.154 --rc genhtml_function_coverage=1 00:06:36.154 --rc genhtml_legend=1 00:06:36.154 --rc geninfo_all_blocks=1 00:06:36.154 --rc geninfo_unexecuted_blocks=1 00:06:36.154 00:06:36.154 ' 00:06:36.154 20:34:30 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:36.154 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:36.154 --rc genhtml_branch_coverage=1 00:06:36.154 --rc genhtml_function_coverage=1 00:06:36.154 --rc genhtml_legend=1 00:06:36.154 --rc geninfo_all_blocks=1 00:06:36.154 --rc geninfo_unexecuted_blocks=1 00:06:36.154 00:06:36.154 ' 00:06:36.154 20:34:30 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:36.154 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:36.154 --rc genhtml_branch_coverage=1 00:06:36.154 --rc genhtml_function_coverage=1 00:06:36.155 --rc 
genhtml_legend=1 00:06:36.155 --rc geninfo_all_blocks=1 00:06:36.155 --rc geninfo_unexecuted_blocks=1 00:06:36.155 00:06:36.155 ' 00:06:36.155 20:34:30 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:36.155 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:36.155 --rc genhtml_branch_coverage=1 00:06:36.155 --rc genhtml_function_coverage=1 00:06:36.155 --rc genhtml_legend=1 00:06:36.155 --rc geninfo_all_blocks=1 00:06:36.155 --rc geninfo_unexecuted_blocks=1 00:06:36.155 00:06:36.155 ' 00:06:36.155 20:34:30 rpc -- rpc/rpc.sh@65 -- # spdk_pid=56885 00:06:36.155 20:34:30 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:36.155 20:34:30 rpc -- rpc/rpc.sh@67 -- # waitforlisten 56885 00:06:36.155 20:34:30 rpc -- common/autotest_common.sh@835 -- # '[' -z 56885 ']' 00:06:36.155 20:34:30 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:06:36.155 20:34:30 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:36.155 20:34:30 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:36.155 20:34:30 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:36.155 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:36.155 20:34:30 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:36.155 20:34:30 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:36.155 [2024-11-26 20:34:31.014489] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:06:36.155 [2024-11-26 20:34:31.014589] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56885 ] 00:06:36.414 [2024-11-26 20:34:31.159888] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.414 [2024-11-26 20:34:31.216320] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:06:36.414 [2024-11-26 20:34:31.216387] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 56885' to capture a snapshot of events at runtime. 00:06:36.414 [2024-11-26 20:34:31.216399] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:36.414 [2024-11-26 20:34:31.216410] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:36.414 [2024-11-26 20:34:31.216418] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid56885 for offline analysis/debug. 
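[Note] spdk_tgt was launched above with -e bdev, so only the bdev tracepoint group (mask 0x8) is enabled, and the notices above name two ways to inspect it. A short sketch following those hints, using the pid from this run (the copy destination is illustrative):

    # live snapshot of the enabled tracepoints (pid 56885 from this run)
    spdk_trace -s spdk_tgt -p 56885
    # or keep the shared-memory trace file for offline analysis (destination path is illustrative)
    cp /dev/shm/spdk_tgt_trace.pid56885 /tmp/spdk_tgt_trace.pid56885
    # the rpc_trace_cmd_test further down checks the same state over JSON-RPC
    ./scripts/rpc.py trace_get_info | jq -r .bdev.tpoint_mask    # that test expects 0xffffffffffffffff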
00:06:36.414 [2024-11-26 20:34:31.216763] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.414 [2024-11-26 20:34:31.320127] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:36.673 20:34:31 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:36.673 20:34:31 rpc -- common/autotest_common.sh@868 -- # return 0 00:06:36.673 20:34:31 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:06:36.673 20:34:31 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:06:36.673 20:34:31 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:06:36.673 20:34:31 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:06:36.673 20:34:31 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:36.673 20:34:31 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:36.673 20:34:31 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:36.673 ************************************ 00:06:36.673 START TEST rpc_integrity 00:06:36.673 ************************************ 00:06:36.673 20:34:31 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:06:36.673 20:34:31 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:36.673 20:34:31 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:36.673 20:34:31 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:36.673 20:34:31 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:36.673 20:34:31 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:36.673 20:34:31 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:36.934 20:34:31 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:36.934 20:34:31 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:36.934 20:34:31 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:36.934 20:34:31 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:36.934 20:34:31 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:36.934 20:34:31 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:06:36.934 20:34:31 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:36.934 20:34:31 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:36.934 20:34:31 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:36.934 20:34:31 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:36.934 20:34:31 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:36.934 { 00:06:36.934 "name": "Malloc0", 00:06:36.934 "aliases": [ 00:06:36.934 "4e5fc274-2e21-4653-900e-9f92158cc6bb" 00:06:36.934 ], 00:06:36.934 "product_name": "Malloc disk", 00:06:36.934 "block_size": 512, 00:06:36.934 "num_blocks": 16384, 00:06:36.934 "uuid": "4e5fc274-2e21-4653-900e-9f92158cc6bb", 00:06:36.934 "assigned_rate_limits": { 00:06:36.934 "rw_ios_per_sec": 0, 00:06:36.934 "rw_mbytes_per_sec": 0, 00:06:36.934 "r_mbytes_per_sec": 0, 00:06:36.934 "w_mbytes_per_sec": 0 00:06:36.934 }, 00:06:36.934 "claimed": false, 00:06:36.934 "zoned": false, 00:06:36.934 
"supported_io_types": { 00:06:36.934 "read": true, 00:06:36.934 "write": true, 00:06:36.934 "unmap": true, 00:06:36.934 "flush": true, 00:06:36.934 "reset": true, 00:06:36.934 "nvme_admin": false, 00:06:36.934 "nvme_io": false, 00:06:36.934 "nvme_io_md": false, 00:06:36.934 "write_zeroes": true, 00:06:36.934 "zcopy": true, 00:06:36.934 "get_zone_info": false, 00:06:36.934 "zone_management": false, 00:06:36.934 "zone_append": false, 00:06:36.934 "compare": false, 00:06:36.934 "compare_and_write": false, 00:06:36.934 "abort": true, 00:06:36.934 "seek_hole": false, 00:06:36.934 "seek_data": false, 00:06:36.934 "copy": true, 00:06:36.934 "nvme_iov_md": false 00:06:36.934 }, 00:06:36.934 "memory_domains": [ 00:06:36.934 { 00:06:36.934 "dma_device_id": "system", 00:06:36.934 "dma_device_type": 1 00:06:36.934 }, 00:06:36.934 { 00:06:36.934 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:36.934 "dma_device_type": 2 00:06:36.934 } 00:06:36.934 ], 00:06:36.934 "driver_specific": {} 00:06:36.934 } 00:06:36.934 ]' 00:06:36.934 20:34:31 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:36.934 20:34:31 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:36.934 20:34:31 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:06:36.934 20:34:31 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:36.934 20:34:31 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:36.934 [2024-11-26 20:34:31.765214] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:06:36.934 [2024-11-26 20:34:31.765282] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:36.934 [2024-11-26 20:34:31.765303] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x102d050 00:06:36.934 [2024-11-26 20:34:31.765314] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:36.934 [2024-11-26 20:34:31.767243] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:36.934 [2024-11-26 20:34:31.767280] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:36.934 Passthru0 00:06:36.934 20:34:31 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:36.934 20:34:31 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:36.935 20:34:31 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:36.935 20:34:31 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:36.935 20:34:31 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:36.935 20:34:31 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:36.935 { 00:06:36.935 "name": "Malloc0", 00:06:36.935 "aliases": [ 00:06:36.935 "4e5fc274-2e21-4653-900e-9f92158cc6bb" 00:06:36.935 ], 00:06:36.935 "product_name": "Malloc disk", 00:06:36.935 "block_size": 512, 00:06:36.935 "num_blocks": 16384, 00:06:36.935 "uuid": "4e5fc274-2e21-4653-900e-9f92158cc6bb", 00:06:36.935 "assigned_rate_limits": { 00:06:36.935 "rw_ios_per_sec": 0, 00:06:36.935 "rw_mbytes_per_sec": 0, 00:06:36.935 "r_mbytes_per_sec": 0, 00:06:36.935 "w_mbytes_per_sec": 0 00:06:36.935 }, 00:06:36.935 "claimed": true, 00:06:36.935 "claim_type": "exclusive_write", 00:06:36.935 "zoned": false, 00:06:36.935 "supported_io_types": { 00:06:36.935 "read": true, 00:06:36.935 "write": true, 00:06:36.935 "unmap": true, 00:06:36.935 "flush": true, 00:06:36.935 "reset": true, 00:06:36.935 "nvme_admin": false, 
00:06:36.935 "nvme_io": false, 00:06:36.935 "nvme_io_md": false, 00:06:36.935 "write_zeroes": true, 00:06:36.935 "zcopy": true, 00:06:36.935 "get_zone_info": false, 00:06:36.935 "zone_management": false, 00:06:36.935 "zone_append": false, 00:06:36.935 "compare": false, 00:06:36.935 "compare_and_write": false, 00:06:36.935 "abort": true, 00:06:36.935 "seek_hole": false, 00:06:36.935 "seek_data": false, 00:06:36.935 "copy": true, 00:06:36.935 "nvme_iov_md": false 00:06:36.935 }, 00:06:36.935 "memory_domains": [ 00:06:36.935 { 00:06:36.935 "dma_device_id": "system", 00:06:36.935 "dma_device_type": 1 00:06:36.935 }, 00:06:36.935 { 00:06:36.935 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:36.935 "dma_device_type": 2 00:06:36.935 } 00:06:36.935 ], 00:06:36.935 "driver_specific": {} 00:06:36.935 }, 00:06:36.935 { 00:06:36.935 "name": "Passthru0", 00:06:36.935 "aliases": [ 00:06:36.935 "59442598-c81b-5a0a-8f96-5d0c8ed8b33e" 00:06:36.935 ], 00:06:36.935 "product_name": "passthru", 00:06:36.935 "block_size": 512, 00:06:36.935 "num_blocks": 16384, 00:06:36.935 "uuid": "59442598-c81b-5a0a-8f96-5d0c8ed8b33e", 00:06:36.935 "assigned_rate_limits": { 00:06:36.935 "rw_ios_per_sec": 0, 00:06:36.935 "rw_mbytes_per_sec": 0, 00:06:36.935 "r_mbytes_per_sec": 0, 00:06:36.935 "w_mbytes_per_sec": 0 00:06:36.935 }, 00:06:36.935 "claimed": false, 00:06:36.935 "zoned": false, 00:06:36.935 "supported_io_types": { 00:06:36.935 "read": true, 00:06:36.935 "write": true, 00:06:36.935 "unmap": true, 00:06:36.935 "flush": true, 00:06:36.935 "reset": true, 00:06:36.935 "nvme_admin": false, 00:06:36.935 "nvme_io": false, 00:06:36.935 "nvme_io_md": false, 00:06:36.935 "write_zeroes": true, 00:06:36.935 "zcopy": true, 00:06:36.935 "get_zone_info": false, 00:06:36.935 "zone_management": false, 00:06:36.935 "zone_append": false, 00:06:36.935 "compare": false, 00:06:36.935 "compare_and_write": false, 00:06:36.935 "abort": true, 00:06:36.935 "seek_hole": false, 00:06:36.935 "seek_data": false, 00:06:36.935 "copy": true, 00:06:36.935 "nvme_iov_md": false 00:06:36.935 }, 00:06:36.935 "memory_domains": [ 00:06:36.935 { 00:06:36.935 "dma_device_id": "system", 00:06:36.935 "dma_device_type": 1 00:06:36.935 }, 00:06:36.935 { 00:06:36.935 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:36.935 "dma_device_type": 2 00:06:36.935 } 00:06:36.935 ], 00:06:36.935 "driver_specific": { 00:06:36.936 "passthru": { 00:06:36.936 "name": "Passthru0", 00:06:36.936 "base_bdev_name": "Malloc0" 00:06:36.936 } 00:06:36.936 } 00:06:36.936 } 00:06:36.936 ]' 00:06:36.936 20:34:31 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:36.936 20:34:31 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:36.936 20:34:31 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:36.936 20:34:31 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:36.936 20:34:31 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:36.936 20:34:31 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:36.936 20:34:31 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:06:36.936 20:34:31 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:36.936 20:34:31 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:36.936 20:34:31 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:36.936 20:34:31 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:36.936 20:34:31 rpc.rpc_integrity -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:06:36.936 20:34:31 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:36.936 20:34:31 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:36.936 20:34:31 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:36.936 20:34:31 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:37.194 20:34:31 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:37.194 00:06:37.194 real 0m0.325s 00:06:37.194 user 0m0.196s 00:06:37.194 sys 0m0.058s 00:06:37.194 20:34:31 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:37.194 20:34:31 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:37.194 ************************************ 00:06:37.194 END TEST rpc_integrity 00:06:37.194 ************************************ 00:06:37.194 20:34:31 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:06:37.194 20:34:31 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:37.194 20:34:31 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:37.194 20:34:31 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:37.194 ************************************ 00:06:37.194 START TEST rpc_plugins 00:06:37.194 ************************************ 00:06:37.194 20:34:31 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:06:37.194 20:34:31 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:06:37.194 20:34:31 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:37.194 20:34:31 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:37.194 20:34:32 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:37.194 20:34:32 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:06:37.194 20:34:32 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:06:37.194 20:34:32 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:37.194 20:34:32 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:37.194 20:34:32 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:37.194 20:34:32 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:06:37.194 { 00:06:37.194 "name": "Malloc1", 00:06:37.194 "aliases": [ 00:06:37.194 "70503c5e-c0b6-4a2d-97cc-e79e93c157be" 00:06:37.194 ], 00:06:37.194 "product_name": "Malloc disk", 00:06:37.194 "block_size": 4096, 00:06:37.194 "num_blocks": 256, 00:06:37.194 "uuid": "70503c5e-c0b6-4a2d-97cc-e79e93c157be", 00:06:37.194 "assigned_rate_limits": { 00:06:37.194 "rw_ios_per_sec": 0, 00:06:37.194 "rw_mbytes_per_sec": 0, 00:06:37.194 "r_mbytes_per_sec": 0, 00:06:37.194 "w_mbytes_per_sec": 0 00:06:37.194 }, 00:06:37.194 "claimed": false, 00:06:37.194 "zoned": false, 00:06:37.194 "supported_io_types": { 00:06:37.194 "read": true, 00:06:37.194 "write": true, 00:06:37.194 "unmap": true, 00:06:37.194 "flush": true, 00:06:37.194 "reset": true, 00:06:37.194 "nvme_admin": false, 00:06:37.194 "nvme_io": false, 00:06:37.194 "nvme_io_md": false, 00:06:37.194 "write_zeroes": true, 00:06:37.194 "zcopy": true, 00:06:37.194 "get_zone_info": false, 00:06:37.194 "zone_management": false, 00:06:37.194 "zone_append": false, 00:06:37.194 "compare": false, 00:06:37.194 "compare_and_write": false, 00:06:37.194 "abort": true, 00:06:37.194 "seek_hole": false, 00:06:37.194 "seek_data": false, 00:06:37.194 "copy": true, 00:06:37.194 "nvme_iov_md": false 00:06:37.194 }, 00:06:37.194 "memory_domains": [ 00:06:37.194 { 
00:06:37.194 "dma_device_id": "system", 00:06:37.194 "dma_device_type": 1 00:06:37.194 }, 00:06:37.194 { 00:06:37.194 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:37.194 "dma_device_type": 2 00:06:37.194 } 00:06:37.194 ], 00:06:37.194 "driver_specific": {} 00:06:37.194 } 00:06:37.194 ]' 00:06:37.194 20:34:32 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:06:37.194 20:34:32 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:06:37.194 20:34:32 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:06:37.194 20:34:32 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:37.194 20:34:32 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:37.194 20:34:32 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:37.194 20:34:32 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:06:37.194 20:34:32 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:37.194 20:34:32 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:37.194 20:34:32 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:37.194 20:34:32 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:06:37.194 20:34:32 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:06:37.194 20:34:32 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:06:37.194 00:06:37.194 real 0m0.173s 00:06:37.194 user 0m0.100s 00:06:37.194 sys 0m0.030s 00:06:37.194 20:34:32 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:37.194 20:34:32 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:37.194 ************************************ 00:06:37.194 END TEST rpc_plugins 00:06:37.194 ************************************ 00:06:37.451 20:34:32 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:06:37.452 20:34:32 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:37.452 20:34:32 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:37.452 20:34:32 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:37.452 ************************************ 00:06:37.452 START TEST rpc_trace_cmd_test 00:06:37.452 ************************************ 00:06:37.452 20:34:32 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:06:37.452 20:34:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:06:37.452 20:34:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:06:37.452 20:34:32 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:37.452 20:34:32 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:37.452 20:34:32 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:37.452 20:34:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:06:37.452 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid56885", 00:06:37.452 "tpoint_group_mask": "0x8", 00:06:37.452 "iscsi_conn": { 00:06:37.452 "mask": "0x2", 00:06:37.452 "tpoint_mask": "0x0" 00:06:37.452 }, 00:06:37.452 "scsi": { 00:06:37.452 "mask": "0x4", 00:06:37.452 "tpoint_mask": "0x0" 00:06:37.452 }, 00:06:37.452 "bdev": { 00:06:37.452 "mask": "0x8", 00:06:37.452 "tpoint_mask": "0xffffffffffffffff" 00:06:37.452 }, 00:06:37.452 "nvmf_rdma": { 00:06:37.452 "mask": "0x10", 00:06:37.452 "tpoint_mask": "0x0" 00:06:37.452 }, 00:06:37.452 "nvmf_tcp": { 00:06:37.452 "mask": "0x20", 00:06:37.452 "tpoint_mask": "0x0" 00:06:37.452 }, 00:06:37.452 "ftl": { 00:06:37.452 
"mask": "0x40", 00:06:37.452 "tpoint_mask": "0x0" 00:06:37.452 }, 00:06:37.452 "blobfs": { 00:06:37.452 "mask": "0x80", 00:06:37.452 "tpoint_mask": "0x0" 00:06:37.452 }, 00:06:37.452 "dsa": { 00:06:37.452 "mask": "0x200", 00:06:37.452 "tpoint_mask": "0x0" 00:06:37.452 }, 00:06:37.452 "thread": { 00:06:37.452 "mask": "0x400", 00:06:37.452 "tpoint_mask": "0x0" 00:06:37.452 }, 00:06:37.452 "nvme_pcie": { 00:06:37.452 "mask": "0x800", 00:06:37.452 "tpoint_mask": "0x0" 00:06:37.452 }, 00:06:37.452 "iaa": { 00:06:37.452 "mask": "0x1000", 00:06:37.452 "tpoint_mask": "0x0" 00:06:37.452 }, 00:06:37.452 "nvme_tcp": { 00:06:37.452 "mask": "0x2000", 00:06:37.452 "tpoint_mask": "0x0" 00:06:37.452 }, 00:06:37.452 "bdev_nvme": { 00:06:37.452 "mask": "0x4000", 00:06:37.452 "tpoint_mask": "0x0" 00:06:37.452 }, 00:06:37.452 "sock": { 00:06:37.452 "mask": "0x8000", 00:06:37.452 "tpoint_mask": "0x0" 00:06:37.452 }, 00:06:37.452 "blob": { 00:06:37.452 "mask": "0x10000", 00:06:37.452 "tpoint_mask": "0x0" 00:06:37.452 }, 00:06:37.452 "bdev_raid": { 00:06:37.452 "mask": "0x20000", 00:06:37.452 "tpoint_mask": "0x0" 00:06:37.452 }, 00:06:37.452 "scheduler": { 00:06:37.452 "mask": "0x40000", 00:06:37.452 "tpoint_mask": "0x0" 00:06:37.452 } 00:06:37.452 }' 00:06:37.452 20:34:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:06:37.452 20:34:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:06:37.452 20:34:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:06:37.452 20:34:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:06:37.452 20:34:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:06:37.452 20:34:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:06:37.452 20:34:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:06:37.709 20:34:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:06:37.709 20:34:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:06:37.709 20:34:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:06:37.709 00:06:37.709 real 0m0.272s 00:06:37.709 user 0m0.219s 00:06:37.709 sys 0m0.042s 00:06:37.709 20:34:32 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:37.709 20:34:32 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:37.709 ************************************ 00:06:37.709 END TEST rpc_trace_cmd_test 00:06:37.709 ************************************ 00:06:37.709 20:34:32 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:06:37.709 20:34:32 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:06:37.709 20:34:32 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:06:37.709 20:34:32 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:37.710 20:34:32 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:37.710 20:34:32 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:37.710 ************************************ 00:06:37.710 START TEST rpc_daemon_integrity 00:06:37.710 ************************************ 00:06:37.710 20:34:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:06:37.710 20:34:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:37.710 20:34:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:37.710 20:34:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:37.710 
20:34:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:37.710 20:34:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:37.710 20:34:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:37.710 20:34:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:37.710 20:34:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:37.710 20:34:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:37.710 20:34:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:37.710 20:34:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:37.710 20:34:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:06:37.710 20:34:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:37.710 20:34:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:37.710 20:34:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:37.710 20:34:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:37.710 20:34:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:37.710 { 00:06:37.710 "name": "Malloc2", 00:06:37.710 "aliases": [ 00:06:37.710 "ccf5c61f-d0df-4ab1-a15d-501f98716b7d" 00:06:37.710 ], 00:06:37.710 "product_name": "Malloc disk", 00:06:37.710 "block_size": 512, 00:06:37.710 "num_blocks": 16384, 00:06:37.710 "uuid": "ccf5c61f-d0df-4ab1-a15d-501f98716b7d", 00:06:37.710 "assigned_rate_limits": { 00:06:37.710 "rw_ios_per_sec": 0, 00:06:37.710 "rw_mbytes_per_sec": 0, 00:06:37.710 "r_mbytes_per_sec": 0, 00:06:37.710 "w_mbytes_per_sec": 0 00:06:37.710 }, 00:06:37.710 "claimed": false, 00:06:37.710 "zoned": false, 00:06:37.710 "supported_io_types": { 00:06:37.710 "read": true, 00:06:37.710 "write": true, 00:06:37.710 "unmap": true, 00:06:37.710 "flush": true, 00:06:37.710 "reset": true, 00:06:37.710 "nvme_admin": false, 00:06:37.710 "nvme_io": false, 00:06:37.710 "nvme_io_md": false, 00:06:37.710 "write_zeroes": true, 00:06:37.710 "zcopy": true, 00:06:37.710 "get_zone_info": false, 00:06:37.710 "zone_management": false, 00:06:37.710 "zone_append": false, 00:06:37.710 "compare": false, 00:06:37.710 "compare_and_write": false, 00:06:37.710 "abort": true, 00:06:37.710 "seek_hole": false, 00:06:37.710 "seek_data": false, 00:06:37.710 "copy": true, 00:06:37.710 "nvme_iov_md": false 00:06:37.710 }, 00:06:37.710 "memory_domains": [ 00:06:37.710 { 00:06:37.710 "dma_device_id": "system", 00:06:37.710 "dma_device_type": 1 00:06:37.710 }, 00:06:37.710 { 00:06:37.710 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:37.710 "dma_device_type": 2 00:06:37.710 } 00:06:37.710 ], 00:06:37.710 "driver_specific": {} 00:06:37.710 } 00:06:37.710 ]' 00:06:37.710 20:34:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:37.710 20:34:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:37.710 20:34:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:06:37.710 20:34:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:37.710 20:34:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:37.968 [2024-11-26 20:34:32.704708] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:06:37.968 [2024-11-26 20:34:32.704765] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:06:37.968 [2024-11-26 20:34:32.704786] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1038030 00:06:37.968 [2024-11-26 20:34:32.704814] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:37.968 [2024-11-26 20:34:32.706378] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:37.969 [2024-11-26 20:34:32.706414] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:37.969 Passthru0 00:06:37.969 20:34:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:37.969 20:34:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:37.969 20:34:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:37.969 20:34:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:37.969 20:34:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:37.969 20:34:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:37.969 { 00:06:37.969 "name": "Malloc2", 00:06:37.969 "aliases": [ 00:06:37.969 "ccf5c61f-d0df-4ab1-a15d-501f98716b7d" 00:06:37.969 ], 00:06:37.969 "product_name": "Malloc disk", 00:06:37.969 "block_size": 512, 00:06:37.969 "num_blocks": 16384, 00:06:37.969 "uuid": "ccf5c61f-d0df-4ab1-a15d-501f98716b7d", 00:06:37.969 "assigned_rate_limits": { 00:06:37.969 "rw_ios_per_sec": 0, 00:06:37.969 "rw_mbytes_per_sec": 0, 00:06:37.969 "r_mbytes_per_sec": 0, 00:06:37.969 "w_mbytes_per_sec": 0 00:06:37.969 }, 00:06:37.969 "claimed": true, 00:06:37.969 "claim_type": "exclusive_write", 00:06:37.969 "zoned": false, 00:06:37.969 "supported_io_types": { 00:06:37.969 "read": true, 00:06:37.969 "write": true, 00:06:37.969 "unmap": true, 00:06:37.969 "flush": true, 00:06:37.969 "reset": true, 00:06:37.969 "nvme_admin": false, 00:06:37.969 "nvme_io": false, 00:06:37.969 "nvme_io_md": false, 00:06:37.969 "write_zeroes": true, 00:06:37.969 "zcopy": true, 00:06:37.969 "get_zone_info": false, 00:06:37.969 "zone_management": false, 00:06:37.969 "zone_append": false, 00:06:37.969 "compare": false, 00:06:37.969 "compare_and_write": false, 00:06:37.969 "abort": true, 00:06:37.969 "seek_hole": false, 00:06:37.969 "seek_data": false, 00:06:37.969 "copy": true, 00:06:37.969 "nvme_iov_md": false 00:06:37.969 }, 00:06:37.969 "memory_domains": [ 00:06:37.969 { 00:06:37.969 "dma_device_id": "system", 00:06:37.969 "dma_device_type": 1 00:06:37.969 }, 00:06:37.969 { 00:06:37.969 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:37.969 "dma_device_type": 2 00:06:37.969 } 00:06:37.969 ], 00:06:37.969 "driver_specific": {} 00:06:37.969 }, 00:06:37.969 { 00:06:37.969 "name": "Passthru0", 00:06:37.969 "aliases": [ 00:06:37.969 "7e60a360-3e2c-50ec-8a88-dba7ee49e02a" 00:06:37.969 ], 00:06:37.969 "product_name": "passthru", 00:06:37.969 "block_size": 512, 00:06:37.969 "num_blocks": 16384, 00:06:37.969 "uuid": "7e60a360-3e2c-50ec-8a88-dba7ee49e02a", 00:06:37.969 "assigned_rate_limits": { 00:06:37.969 "rw_ios_per_sec": 0, 00:06:37.969 "rw_mbytes_per_sec": 0, 00:06:37.969 "r_mbytes_per_sec": 0, 00:06:37.969 "w_mbytes_per_sec": 0 00:06:37.969 }, 00:06:37.969 "claimed": false, 00:06:37.969 "zoned": false, 00:06:37.969 "supported_io_types": { 00:06:37.969 "read": true, 00:06:37.969 "write": true, 00:06:37.969 "unmap": true, 00:06:37.969 "flush": true, 00:06:37.969 "reset": true, 00:06:37.969 "nvme_admin": false, 00:06:37.969 "nvme_io": false, 00:06:37.969 
"nvme_io_md": false, 00:06:37.969 "write_zeroes": true, 00:06:37.969 "zcopy": true, 00:06:37.969 "get_zone_info": false, 00:06:37.969 "zone_management": false, 00:06:37.969 "zone_append": false, 00:06:37.969 "compare": false, 00:06:37.969 "compare_and_write": false, 00:06:37.969 "abort": true, 00:06:37.969 "seek_hole": false, 00:06:37.969 "seek_data": false, 00:06:37.969 "copy": true, 00:06:37.969 "nvme_iov_md": false 00:06:37.969 }, 00:06:37.969 "memory_domains": [ 00:06:37.969 { 00:06:37.969 "dma_device_id": "system", 00:06:37.969 "dma_device_type": 1 00:06:37.969 }, 00:06:37.969 { 00:06:37.969 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:37.969 "dma_device_type": 2 00:06:37.969 } 00:06:37.969 ], 00:06:37.969 "driver_specific": { 00:06:37.969 "passthru": { 00:06:37.969 "name": "Passthru0", 00:06:37.969 "base_bdev_name": "Malloc2" 00:06:37.969 } 00:06:37.969 } 00:06:37.969 } 00:06:37.969 ]' 00:06:37.969 20:34:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:37.969 20:34:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:37.969 20:34:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:37.969 20:34:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:37.969 20:34:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:37.969 20:34:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:37.969 20:34:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:06:37.969 20:34:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:37.969 20:34:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:37.969 20:34:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:37.969 20:34:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:37.969 20:34:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:37.969 20:34:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:37.969 20:34:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:37.969 20:34:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:37.969 20:34:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:37.969 20:34:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:37.969 00:06:37.969 real 0m0.307s 00:06:37.969 user 0m0.177s 00:06:37.969 sys 0m0.060s 00:06:37.969 20:34:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:37.969 20:34:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:37.969 ************************************ 00:06:37.970 END TEST rpc_daemon_integrity 00:06:37.970 ************************************ 00:06:37.970 20:34:32 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:06:37.970 20:34:32 rpc -- rpc/rpc.sh@84 -- # killprocess 56885 00:06:37.970 20:34:32 rpc -- common/autotest_common.sh@954 -- # '[' -z 56885 ']' 00:06:37.970 20:34:32 rpc -- common/autotest_common.sh@958 -- # kill -0 56885 00:06:37.970 20:34:32 rpc -- common/autotest_common.sh@959 -- # uname 00:06:37.970 20:34:32 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:37.970 20:34:32 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 56885 00:06:38.228 20:34:32 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 
00:06:38.228 20:34:32 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:38.228 killing process with pid 56885 00:06:38.228 20:34:32 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 56885' 00:06:38.228 20:34:32 rpc -- common/autotest_common.sh@973 -- # kill 56885 00:06:38.228 20:34:32 rpc -- common/autotest_common.sh@978 -- # wait 56885 00:06:38.486 00:06:38.486 real 0m2.547s 00:06:38.487 user 0m2.985s 00:06:38.487 sys 0m0.923s 00:06:38.487 20:34:33 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:38.487 20:34:33 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:38.487 ************************************ 00:06:38.487 END TEST rpc 00:06:38.487 ************************************ 00:06:38.487 20:34:33 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:06:38.487 20:34:33 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:38.487 20:34:33 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:38.487 20:34:33 -- common/autotest_common.sh@10 -- # set +x 00:06:38.487 ************************************ 00:06:38.487 START TEST skip_rpc 00:06:38.487 ************************************ 00:06:38.487 20:34:33 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:06:38.487 * Looking for test storage... 00:06:38.487 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:06:38.487 20:34:33 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:38.487 20:34:33 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:06:38.487 20:34:33 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:38.745 20:34:33 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:38.745 20:34:33 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:38.745 20:34:33 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:38.745 20:34:33 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:38.745 20:34:33 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:38.745 20:34:33 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:38.745 20:34:33 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:38.745 20:34:33 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:38.745 20:34:33 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:38.745 20:34:33 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:38.745 20:34:33 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:38.745 20:34:33 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:38.745 20:34:33 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:38.745 20:34:33 skip_rpc -- scripts/common.sh@345 -- # : 1 00:06:38.745 20:34:33 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:38.745 20:34:33 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:38.745 20:34:33 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:38.745 20:34:33 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:06:38.745 20:34:33 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:38.745 20:34:33 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:06:38.745 20:34:33 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:38.745 20:34:33 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:38.745 20:34:33 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:06:38.745 20:34:33 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:38.745 20:34:33 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:06:38.745 20:34:33 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:38.745 20:34:33 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:38.745 20:34:33 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:38.745 20:34:33 skip_rpc -- scripts/common.sh@368 -- # return 0 00:06:38.745 20:34:33 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:38.745 20:34:33 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:38.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.745 --rc genhtml_branch_coverage=1 00:06:38.745 --rc genhtml_function_coverage=1 00:06:38.745 --rc genhtml_legend=1 00:06:38.745 --rc geninfo_all_blocks=1 00:06:38.745 --rc geninfo_unexecuted_blocks=1 00:06:38.745 00:06:38.745 ' 00:06:38.745 20:34:33 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:38.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.745 --rc genhtml_branch_coverage=1 00:06:38.745 --rc genhtml_function_coverage=1 00:06:38.745 --rc genhtml_legend=1 00:06:38.745 --rc geninfo_all_blocks=1 00:06:38.745 --rc geninfo_unexecuted_blocks=1 00:06:38.745 00:06:38.745 ' 00:06:38.745 20:34:33 skip_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:38.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.745 --rc genhtml_branch_coverage=1 00:06:38.745 --rc genhtml_function_coverage=1 00:06:38.746 --rc genhtml_legend=1 00:06:38.746 --rc geninfo_all_blocks=1 00:06:38.746 --rc geninfo_unexecuted_blocks=1 00:06:38.746 00:06:38.746 ' 00:06:38.746 20:34:33 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:38.746 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.746 --rc genhtml_branch_coverage=1 00:06:38.746 --rc genhtml_function_coverage=1 00:06:38.746 --rc genhtml_legend=1 00:06:38.746 --rc geninfo_all_blocks=1 00:06:38.746 --rc geninfo_unexecuted_blocks=1 00:06:38.746 00:06:38.746 ' 00:06:38.746 20:34:33 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:38.746 20:34:33 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:38.746 20:34:33 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:06:38.746 20:34:33 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:38.746 20:34:33 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:38.746 20:34:33 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:38.746 ************************************ 00:06:38.746 START TEST skip_rpc 00:06:38.746 ************************************ 00:06:38.746 20:34:33 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:06:38.746 20:34:33 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@16 -- # local spdk_pid=57089 00:06:38.746 20:34:33 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:38.746 20:34:33 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:06:38.746 20:34:33 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:06:38.746 [2024-11-26 20:34:33.682150] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:06:38.746 [2024-11-26 20:34:33.682897] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57089 ] 00:06:39.004 [2024-11-26 20:34:33.838898] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.004 [2024-11-26 20:34:33.896335] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.004 [2024-11-26 20:34:33.954906] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:44.272 20:34:38 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:06:44.272 20:34:38 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:06:44.272 20:34:38 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:06:44.272 20:34:38 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:06:44.272 20:34:38 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:44.272 20:34:38 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:06:44.272 20:34:38 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:44.272 20:34:38 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:06:44.272 20:34:38 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:44.272 20:34:38 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:44.272 20:34:38 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:44.272 20:34:38 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:06:44.272 20:34:38 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:44.272 20:34:38 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:44.272 20:34:38 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:44.272 20:34:38 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:06:44.272 20:34:38 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 57089 00:06:44.272 20:34:38 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 57089 ']' 00:06:44.272 20:34:38 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 57089 00:06:44.272 20:34:38 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:06:44.272 20:34:38 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:44.272 20:34:38 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57089 00:06:44.272 20:34:38 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:44.272 killing process with pid 57089 00:06:44.272 20:34:38 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:44.272 20:34:38 skip_rpc.skip_rpc -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 57089' 00:06:44.272 20:34:38 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 57089 00:06:44.272 20:34:38 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 57089 00:06:44.272 00:06:44.272 real 0m5.393s 00:06:44.272 user 0m5.034s 00:06:44.272 sys 0m0.284s 00:06:44.272 20:34:38 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:44.272 20:34:38 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:44.272 ************************************ 00:06:44.272 END TEST skip_rpc 00:06:44.272 ************************************ 00:06:44.272 20:34:39 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:06:44.272 20:34:39 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:44.272 20:34:39 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:44.272 20:34:39 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:44.272 ************************************ 00:06:44.272 START TEST skip_rpc_with_json 00:06:44.272 ************************************ 00:06:44.272 20:34:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:06:44.272 20:34:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:06:44.272 20:34:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=57170 00:06:44.272 20:34:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:44.272 20:34:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:44.272 20:34:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 57170 00:06:44.272 20:34:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 57170 ']' 00:06:44.272 20:34:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:44.272 20:34:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:44.272 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:44.272 20:34:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:44.272 20:34:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:44.272 20:34:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:44.272 [2024-11-26 20:34:39.102010] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
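[Note] The inner skip_rpc test that just finished starts the target with --no-rpc-server, sleeps, and asserts that an RPC call fails (NOT rpc_cmd spdk_get_version, es=1 above). A condensed sketch of that check, reusing the target invocation from the trace (tgt_pid and the echo messages are illustrative):

    # no RPC listener is created, so any client call must fail
    ./build/bin/spdk_tgt --no-rpc-server -m 0x1 &
    tgt_pid=$!
    sleep 5                                    # same settle time as skip_rpc.sh@19 above
    if ./scripts/rpc.py spdk_get_version; then
        echo "unexpected: RPC server is up"
    else
        echo "RPC correctly unavailable"       # this is the behaviour the test asserts
    fi
    kill "$tgt_pid"; wait "$tgt_pid"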
00:06:44.272 [2024-11-26 20:34:39.102138] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57170 ] 00:06:44.272 [2024-11-26 20:34:39.243096] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.531 [2024-11-26 20:34:39.299286] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.531 [2024-11-26 20:34:39.359408] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:45.108 20:34:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:45.108 20:34:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:06:45.108 20:34:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:06:45.108 20:34:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:45.108 20:34:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:45.108 [2024-11-26 20:34:40.087916] nvmf_rpc.c:2706:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:06:45.108 request: 00:06:45.108 { 00:06:45.108 "trtype": "tcp", 00:06:45.108 "method": "nvmf_get_transports", 00:06:45.108 "req_id": 1 00:06:45.108 } 00:06:45.108 Got JSON-RPC error response 00:06:45.108 response: 00:06:45.108 { 00:06:45.108 "code": -19, 00:06:45.108 "message": "No such device" 00:06:45.108 } 00:06:45.108 20:34:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:45.108 20:34:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:06:45.108 20:34:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:45.108 20:34:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:45.367 [2024-11-26 20:34:40.100034] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:45.367 20:34:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:45.367 20:34:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:06:45.367 20:34:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:45.367 20:34:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:45.367 20:34:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:45.367 20:34:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:45.367 { 00:06:45.367 "subsystems": [ 00:06:45.367 { 00:06:45.367 "subsystem": "fsdev", 00:06:45.367 "config": [ 00:06:45.367 { 00:06:45.367 "method": "fsdev_set_opts", 00:06:45.367 "params": { 00:06:45.367 "fsdev_io_pool_size": 65535, 00:06:45.367 "fsdev_io_cache_size": 256 00:06:45.367 } 00:06:45.367 } 00:06:45.367 ] 00:06:45.367 }, 00:06:45.367 { 00:06:45.367 "subsystem": "keyring", 00:06:45.367 "config": [] 00:06:45.367 }, 00:06:45.367 { 00:06:45.367 "subsystem": "iobuf", 00:06:45.367 "config": [ 00:06:45.367 { 00:06:45.367 "method": "iobuf_set_options", 00:06:45.367 "params": { 00:06:45.367 "small_pool_count": 8192, 00:06:45.367 "large_pool_count": 1024, 00:06:45.367 "small_bufsize": 8192, 00:06:45.367 "large_bufsize": 135168, 00:06:45.367 "enable_numa": false 00:06:45.367 } 
00:06:45.367 } 00:06:45.367 ] 00:06:45.367 }, 00:06:45.367 { 00:06:45.367 "subsystem": "sock", 00:06:45.367 "config": [ 00:06:45.367 { 00:06:45.367 "method": "sock_set_default_impl", 00:06:45.367 "params": { 00:06:45.367 "impl_name": "uring" 00:06:45.367 } 00:06:45.367 }, 00:06:45.367 { 00:06:45.367 "method": "sock_impl_set_options", 00:06:45.367 "params": { 00:06:45.367 "impl_name": "ssl", 00:06:45.367 "recv_buf_size": 4096, 00:06:45.367 "send_buf_size": 4096, 00:06:45.367 "enable_recv_pipe": true, 00:06:45.367 "enable_quickack": false, 00:06:45.367 "enable_placement_id": 0, 00:06:45.367 "enable_zerocopy_send_server": true, 00:06:45.367 "enable_zerocopy_send_client": false, 00:06:45.367 "zerocopy_threshold": 0, 00:06:45.367 "tls_version": 0, 00:06:45.367 "enable_ktls": false 00:06:45.367 } 00:06:45.367 }, 00:06:45.367 { 00:06:45.367 "method": "sock_impl_set_options", 00:06:45.367 "params": { 00:06:45.367 "impl_name": "posix", 00:06:45.367 "recv_buf_size": 2097152, 00:06:45.367 "send_buf_size": 2097152, 00:06:45.367 "enable_recv_pipe": true, 00:06:45.367 "enable_quickack": false, 00:06:45.367 "enable_placement_id": 0, 00:06:45.367 "enable_zerocopy_send_server": true, 00:06:45.367 "enable_zerocopy_send_client": false, 00:06:45.367 "zerocopy_threshold": 0, 00:06:45.367 "tls_version": 0, 00:06:45.367 "enable_ktls": false 00:06:45.367 } 00:06:45.367 }, 00:06:45.367 { 00:06:45.367 "method": "sock_impl_set_options", 00:06:45.367 "params": { 00:06:45.367 "impl_name": "uring", 00:06:45.367 "recv_buf_size": 2097152, 00:06:45.367 "send_buf_size": 2097152, 00:06:45.367 "enable_recv_pipe": true, 00:06:45.367 "enable_quickack": false, 00:06:45.367 "enable_placement_id": 0, 00:06:45.367 "enable_zerocopy_send_server": false, 00:06:45.367 "enable_zerocopy_send_client": false, 00:06:45.367 "zerocopy_threshold": 0, 00:06:45.367 "tls_version": 0, 00:06:45.367 "enable_ktls": false 00:06:45.367 } 00:06:45.367 } 00:06:45.367 ] 00:06:45.367 }, 00:06:45.367 { 00:06:45.367 "subsystem": "vmd", 00:06:45.367 "config": [] 00:06:45.367 }, 00:06:45.367 { 00:06:45.367 "subsystem": "accel", 00:06:45.367 "config": [ 00:06:45.367 { 00:06:45.367 "method": "accel_set_options", 00:06:45.367 "params": { 00:06:45.367 "small_cache_size": 128, 00:06:45.367 "large_cache_size": 16, 00:06:45.367 "task_count": 2048, 00:06:45.367 "sequence_count": 2048, 00:06:45.367 "buf_count": 2048 00:06:45.367 } 00:06:45.367 } 00:06:45.367 ] 00:06:45.367 }, 00:06:45.367 { 00:06:45.367 "subsystem": "bdev", 00:06:45.367 "config": [ 00:06:45.367 { 00:06:45.368 "method": "bdev_set_options", 00:06:45.368 "params": { 00:06:45.368 "bdev_io_pool_size": 65535, 00:06:45.368 "bdev_io_cache_size": 256, 00:06:45.368 "bdev_auto_examine": true, 00:06:45.368 "iobuf_small_cache_size": 128, 00:06:45.368 "iobuf_large_cache_size": 16 00:06:45.368 } 00:06:45.368 }, 00:06:45.368 { 00:06:45.368 "method": "bdev_raid_set_options", 00:06:45.368 "params": { 00:06:45.368 "process_window_size_kb": 1024, 00:06:45.368 "process_max_bandwidth_mb_sec": 0 00:06:45.368 } 00:06:45.368 }, 00:06:45.368 { 00:06:45.368 "method": "bdev_iscsi_set_options", 00:06:45.368 "params": { 00:06:45.368 "timeout_sec": 30 00:06:45.368 } 00:06:45.368 }, 00:06:45.368 { 00:06:45.368 "method": "bdev_nvme_set_options", 00:06:45.368 "params": { 00:06:45.368 "action_on_timeout": "none", 00:06:45.368 "timeout_us": 0, 00:06:45.368 "timeout_admin_us": 0, 00:06:45.368 "keep_alive_timeout_ms": 10000, 00:06:45.368 "arbitration_burst": 0, 00:06:45.368 "low_priority_weight": 0, 00:06:45.368 "medium_priority_weight": 
0, 00:06:45.368 "high_priority_weight": 0, 00:06:45.368 "nvme_adminq_poll_period_us": 10000, 00:06:45.368 "nvme_ioq_poll_period_us": 0, 00:06:45.368 "io_queue_requests": 0, 00:06:45.368 "delay_cmd_submit": true, 00:06:45.368 "transport_retry_count": 4, 00:06:45.368 "bdev_retry_count": 3, 00:06:45.368 "transport_ack_timeout": 0, 00:06:45.368 "ctrlr_loss_timeout_sec": 0, 00:06:45.368 "reconnect_delay_sec": 0, 00:06:45.368 "fast_io_fail_timeout_sec": 0, 00:06:45.368 "disable_auto_failback": false, 00:06:45.368 "generate_uuids": false, 00:06:45.368 "transport_tos": 0, 00:06:45.368 "nvme_error_stat": false, 00:06:45.368 "rdma_srq_size": 0, 00:06:45.368 "io_path_stat": false, 00:06:45.368 "allow_accel_sequence": false, 00:06:45.368 "rdma_max_cq_size": 0, 00:06:45.368 "rdma_cm_event_timeout_ms": 0, 00:06:45.368 "dhchap_digests": [ 00:06:45.368 "sha256", 00:06:45.368 "sha384", 00:06:45.368 "sha512" 00:06:45.368 ], 00:06:45.368 "dhchap_dhgroups": [ 00:06:45.368 "null", 00:06:45.368 "ffdhe2048", 00:06:45.368 "ffdhe3072", 00:06:45.368 "ffdhe4096", 00:06:45.368 "ffdhe6144", 00:06:45.368 "ffdhe8192" 00:06:45.368 ] 00:06:45.368 } 00:06:45.368 }, 00:06:45.368 { 00:06:45.368 "method": "bdev_nvme_set_hotplug", 00:06:45.368 "params": { 00:06:45.368 "period_us": 100000, 00:06:45.368 "enable": false 00:06:45.368 } 00:06:45.368 }, 00:06:45.368 { 00:06:45.368 "method": "bdev_wait_for_examine" 00:06:45.368 } 00:06:45.368 ] 00:06:45.368 }, 00:06:45.368 { 00:06:45.368 "subsystem": "scsi", 00:06:45.368 "config": null 00:06:45.368 }, 00:06:45.368 { 00:06:45.368 "subsystem": "scheduler", 00:06:45.368 "config": [ 00:06:45.368 { 00:06:45.368 "method": "framework_set_scheduler", 00:06:45.368 "params": { 00:06:45.368 "name": "static" 00:06:45.368 } 00:06:45.368 } 00:06:45.368 ] 00:06:45.368 }, 00:06:45.368 { 00:06:45.368 "subsystem": "vhost_scsi", 00:06:45.368 "config": [] 00:06:45.368 }, 00:06:45.368 { 00:06:45.368 "subsystem": "vhost_blk", 00:06:45.368 "config": [] 00:06:45.368 }, 00:06:45.368 { 00:06:45.368 "subsystem": "ublk", 00:06:45.368 "config": [] 00:06:45.368 }, 00:06:45.368 { 00:06:45.368 "subsystem": "nbd", 00:06:45.368 "config": [] 00:06:45.368 }, 00:06:45.368 { 00:06:45.368 "subsystem": "nvmf", 00:06:45.368 "config": [ 00:06:45.368 { 00:06:45.368 "method": "nvmf_set_config", 00:06:45.368 "params": { 00:06:45.368 "discovery_filter": "match_any", 00:06:45.368 "admin_cmd_passthru": { 00:06:45.368 "identify_ctrlr": false 00:06:45.368 }, 00:06:45.368 "dhchap_digests": [ 00:06:45.368 "sha256", 00:06:45.368 "sha384", 00:06:45.368 "sha512" 00:06:45.368 ], 00:06:45.368 "dhchap_dhgroups": [ 00:06:45.368 "null", 00:06:45.368 "ffdhe2048", 00:06:45.368 "ffdhe3072", 00:06:45.368 "ffdhe4096", 00:06:45.368 "ffdhe6144", 00:06:45.368 "ffdhe8192" 00:06:45.368 ] 00:06:45.368 } 00:06:45.368 }, 00:06:45.368 { 00:06:45.368 "method": "nvmf_set_max_subsystems", 00:06:45.368 "params": { 00:06:45.368 "max_subsystems": 1024 00:06:45.368 } 00:06:45.368 }, 00:06:45.368 { 00:06:45.368 "method": "nvmf_set_crdt", 00:06:45.368 "params": { 00:06:45.368 "crdt1": 0, 00:06:45.368 "crdt2": 0, 00:06:45.368 "crdt3": 0 00:06:45.368 } 00:06:45.368 }, 00:06:45.368 { 00:06:45.368 "method": "nvmf_create_transport", 00:06:45.368 "params": { 00:06:45.368 "trtype": "TCP", 00:06:45.368 "max_queue_depth": 128, 00:06:45.368 "max_io_qpairs_per_ctrlr": 127, 00:06:45.368 "in_capsule_data_size": 4096, 00:06:45.368 "max_io_size": 131072, 00:06:45.368 "io_unit_size": 131072, 00:06:45.368 "max_aq_depth": 128, 00:06:45.368 "num_shared_buffers": 511, 00:06:45.368 
"buf_cache_size": 4294967295, 00:06:45.368 "dif_insert_or_strip": false, 00:06:45.368 "zcopy": false, 00:06:45.368 "c2h_success": true, 00:06:45.368 "sock_priority": 0, 00:06:45.368 "abort_timeout_sec": 1, 00:06:45.368 "ack_timeout": 0, 00:06:45.368 "data_wr_pool_size": 0 00:06:45.368 } 00:06:45.368 } 00:06:45.368 ] 00:06:45.368 }, 00:06:45.368 { 00:06:45.368 "subsystem": "iscsi", 00:06:45.368 "config": [ 00:06:45.368 { 00:06:45.368 "method": "iscsi_set_options", 00:06:45.368 "params": { 00:06:45.368 "node_base": "iqn.2016-06.io.spdk", 00:06:45.368 "max_sessions": 128, 00:06:45.368 "max_connections_per_session": 2, 00:06:45.368 "max_queue_depth": 64, 00:06:45.368 "default_time2wait": 2, 00:06:45.368 "default_time2retain": 20, 00:06:45.368 "first_burst_length": 8192, 00:06:45.368 "immediate_data": true, 00:06:45.368 "allow_duplicated_isid": false, 00:06:45.368 "error_recovery_level": 0, 00:06:45.368 "nop_timeout": 60, 00:06:45.368 "nop_in_interval": 30, 00:06:45.368 "disable_chap": false, 00:06:45.368 "require_chap": false, 00:06:45.368 "mutual_chap": false, 00:06:45.368 "chap_group": 0, 00:06:45.368 "max_large_datain_per_connection": 64, 00:06:45.368 "max_r2t_per_connection": 4, 00:06:45.368 "pdu_pool_size": 36864, 00:06:45.368 "immediate_data_pool_size": 16384, 00:06:45.368 "data_out_pool_size": 2048 00:06:45.368 } 00:06:45.368 } 00:06:45.368 ] 00:06:45.368 } 00:06:45.368 ] 00:06:45.368 } 00:06:45.368 20:34:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:06:45.368 20:34:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 57170 00:06:45.368 20:34:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57170 ']' 00:06:45.368 20:34:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57170 00:06:45.368 20:34:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:06:45.368 20:34:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:45.368 20:34:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57170 00:06:45.368 20:34:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:45.368 20:34:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:45.368 killing process with pid 57170 00:06:45.368 20:34:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57170' 00:06:45.368 20:34:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57170 00:06:45.368 20:34:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57170 00:06:45.936 20:34:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57198 00:06:45.936 20:34:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:45.936 20:34:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:06:51.226 20:34:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57198 00:06:51.226 20:34:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57198 ']' 00:06:51.226 20:34:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57198 00:06:51.226 20:34:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:06:51.226 20:34:45 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:51.226 20:34:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57198 00:06:51.226 20:34:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:51.226 20:34:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:51.226 killing process with pid 57198 00:06:51.226 20:34:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57198' 00:06:51.226 20:34:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57198 00:06:51.226 20:34:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57198 00:06:51.226 20:34:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:51.226 20:34:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:51.226 00:06:51.226 real 0m7.010s 00:06:51.226 user 0m6.781s 00:06:51.226 sys 0m0.647s 00:06:51.226 20:34:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:51.226 20:34:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:51.226 ************************************ 00:06:51.226 END TEST skip_rpc_with_json 00:06:51.226 ************************************ 00:06:51.226 20:34:46 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:06:51.226 20:34:46 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:51.226 20:34:46 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:51.226 20:34:46 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:51.226 ************************************ 00:06:51.226 START TEST skip_rpc_with_delay 00:06:51.226 ************************************ 00:06:51.226 20:34:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:06:51.226 20:34:46 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:51.227 20:34:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:06:51.227 20:34:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:51.227 20:34:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:51.227 20:34:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:51.227 20:34:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:51.227 20:34:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:51.227 20:34:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:51.227 20:34:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:51.227 20:34:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:51.227 20:34:46 
skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:06:51.227 20:34:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:51.227 [2024-11-26 20:34:46.185126] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:06:51.227 20:34:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:06:51.227 20:34:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:51.227 20:34:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:51.227 20:34:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:51.227 00:06:51.227 real 0m0.098s 00:06:51.227 user 0m0.055s 00:06:51.227 sys 0m0.042s 00:06:51.227 20:34:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:51.227 ************************************ 00:06:51.227 END TEST skip_rpc_with_delay 00:06:51.227 20:34:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:06:51.227 ************************************ 00:06:51.485 20:34:46 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:06:51.485 20:34:46 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:06:51.485 20:34:46 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:06:51.485 20:34:46 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:51.485 20:34:46 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:51.485 20:34:46 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:51.485 ************************************ 00:06:51.485 START TEST exit_on_failed_rpc_init 00:06:51.485 ************************************ 00:06:51.485 20:34:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:06:51.485 20:34:46 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57307 00:06:51.485 20:34:46 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57307 00:06:51.485 20:34:46 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:51.485 20:34:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 57307 ']' 00:06:51.485 20:34:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:51.485 20:34:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:51.485 20:34:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:51.485 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:51.485 20:34:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:51.485 20:34:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:51.485 [2024-11-26 20:34:46.323817] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
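The delay test only exercises a CLI contract: '--wait-for-rpc' asks the app to pause until an RPC tells it to continue, which is impossible once '--no-rpc-server' disables the RPC server, so spdk_tgt must refuse to start (the *ERROR* line above). A hedged sketch of that negative check:

# Sketch: the flag combination must fail fast with a non-zero exit status.
if ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc; then
    echo "unexpected success: --wait-for-rpc should not work without an RPC server" >&2
    exit 1
fi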
00:06:51.485 [2024-11-26 20:34:46.323921] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57307 ] 00:06:51.485 [2024-11-26 20:34:46.472729] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.743 [2024-11-26 20:34:46.532751] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.743 [2024-11-26 20:34:46.594743] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:52.002 20:34:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:52.002 20:34:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:06:52.002 20:34:46 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:52.002 20:34:46 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:52.002 20:34:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:06:52.002 20:34:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:52.002 20:34:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:52.002 20:34:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:52.002 20:34:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:52.002 20:34:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:52.002 20:34:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:52.002 20:34:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:52.002 20:34:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:52.002 20:34:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:06:52.002 20:34:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:52.002 [2024-11-26 20:34:46.857441] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:06:52.002 [2024-11-26 20:34:46.857561] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57318 ] 00:06:52.261 [2024-11-26 20:34:47.010487] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.261 [2024-11-26 20:34:47.069099] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:52.261 [2024-11-26 20:34:47.069217] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:06:52.261 [2024-11-26 20:34:47.069231] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:06:52.261 [2024-11-26 20:34:47.069241] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:52.261 20:34:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:06:52.261 20:34:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:52.261 20:34:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:06:52.261 20:34:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:06:52.261 20:34:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:06:52.261 20:34:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:52.261 20:34:47 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:52.261 20:34:47 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57307 00:06:52.261 20:34:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 57307 ']' 00:06:52.261 20:34:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 57307 00:06:52.261 20:34:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:06:52.261 20:34:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:52.261 20:34:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57307 00:06:52.261 20:34:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:52.261 20:34:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:52.261 20:34:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57307' 00:06:52.261 killing process with pid 57307 00:06:52.261 20:34:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 57307 00:06:52.261 20:34:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 57307 00:06:52.828 00:06:52.828 real 0m1.241s 00:06:52.828 user 0m1.298s 00:06:52.828 sys 0m0.391s 00:06:52.828 20:34:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:52.828 20:34:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:52.828 ************************************ 00:06:52.828 END TEST exit_on_failed_rpc_init 00:06:52.828 ************************************ 00:06:52.828 20:34:47 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:52.828 00:06:52.828 real 0m14.201s 00:06:52.828 user 0m13.367s 00:06:52.828 sys 0m1.631s 00:06:52.828 20:34:47 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:52.828 20:34:47 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:52.828 ************************************ 00:06:52.828 END TEST skip_rpc 00:06:52.828 ************************************ 00:06:52.828 20:34:47 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:52.828 20:34:47 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:52.828 20:34:47 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:52.828 20:34:47 -- common/autotest_common.sh@10 -- # set +x 00:06:52.828 
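exit_on_failed_rpc_init boils down to a socket-collision scenario: a first target owns the default RPC socket, a second target started without '-r <other socket>' must fail during RPC init ("RPC Unix domain socket path /var/tmp/spdk.sock in use") and exit non-zero, leaving the first instance untouched. A sketch of that flow under the same assumptions:

# Sketch: two targets on the same default RPC socket; the second must fail.
./build/bin/spdk_tgt -m 0x1 &
first_pid=$!
# ... wait until /var/tmp/spdk.sock is accepting RPCs ...
if ./build/bin/spdk_tgt -m 0x2; then        # same default socket, no -r override
    echo "second instance should not have started" >&2
fi
kill "$first_pid"; wait "$first_pid"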
************************************ 00:06:52.828 START TEST rpc_client 00:06:52.828 ************************************ 00:06:52.828 20:34:47 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:52.828 * Looking for test storage... 00:06:52.828 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:06:52.828 20:34:47 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:52.828 20:34:47 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:52.828 20:34:47 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:06:52.828 20:34:47 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:52.828 20:34:47 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:52.828 20:34:47 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:52.828 20:34:47 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:52.828 20:34:47 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:06:52.828 20:34:47 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:06:52.828 20:34:47 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:06:52.828 20:34:47 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:06:52.828 20:34:47 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:06:52.828 20:34:47 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:06:52.828 20:34:47 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:06:52.828 20:34:47 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:52.828 20:34:47 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:06:52.828 20:34:47 rpc_client -- scripts/common.sh@345 -- # : 1 00:06:53.087 20:34:47 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:53.087 20:34:47 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:53.087 20:34:47 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:06:53.087 20:34:47 rpc_client -- scripts/common.sh@353 -- # local d=1 00:06:53.087 20:34:47 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:53.087 20:34:47 rpc_client -- scripts/common.sh@355 -- # echo 1 00:06:53.087 20:34:47 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:06:53.087 20:34:47 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:06:53.087 20:34:47 rpc_client -- scripts/common.sh@353 -- # local d=2 00:06:53.087 20:34:47 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:53.087 20:34:47 rpc_client -- scripts/common.sh@355 -- # echo 2 00:06:53.087 20:34:47 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:06:53.087 20:34:47 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:53.087 20:34:47 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:53.087 20:34:47 rpc_client -- scripts/common.sh@368 -- # return 0 00:06:53.087 20:34:47 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:53.087 20:34:47 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:53.087 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:53.087 --rc genhtml_branch_coverage=1 00:06:53.087 --rc genhtml_function_coverage=1 00:06:53.087 --rc genhtml_legend=1 00:06:53.087 --rc geninfo_all_blocks=1 00:06:53.087 --rc geninfo_unexecuted_blocks=1 00:06:53.087 00:06:53.087 ' 00:06:53.087 20:34:47 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:53.087 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:53.087 --rc genhtml_branch_coverage=1 00:06:53.087 --rc genhtml_function_coverage=1 00:06:53.087 --rc genhtml_legend=1 00:06:53.087 --rc geninfo_all_blocks=1 00:06:53.087 --rc geninfo_unexecuted_blocks=1 00:06:53.087 00:06:53.087 ' 00:06:53.087 20:34:47 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:53.087 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:53.087 --rc genhtml_branch_coverage=1 00:06:53.087 --rc genhtml_function_coverage=1 00:06:53.087 --rc genhtml_legend=1 00:06:53.087 --rc geninfo_all_blocks=1 00:06:53.087 --rc geninfo_unexecuted_blocks=1 00:06:53.087 00:06:53.087 ' 00:06:53.087 20:34:47 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:53.087 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:53.087 --rc genhtml_branch_coverage=1 00:06:53.087 --rc genhtml_function_coverage=1 00:06:53.087 --rc genhtml_legend=1 00:06:53.087 --rc geninfo_all_blocks=1 00:06:53.087 --rc geninfo_unexecuted_blocks=1 00:06:53.087 00:06:53.087 ' 00:06:53.087 20:34:47 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:06:53.087 OK 00:06:53.087 20:34:47 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:53.087 00:06:53.087 real 0m0.238s 00:06:53.087 user 0m0.141s 00:06:53.087 sys 0m0.112s 00:06:53.087 20:34:47 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:53.087 20:34:47 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:06:53.087 ************************************ 00:06:53.087 END TEST rpc_client 00:06:53.087 ************************************ 00:06:53.087 20:34:47 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:53.087 20:34:47 -- 
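The cmp_versions machinery above just decides whether the installed lcov predates 2.x so the matching LCOV_OPTS set can be exported. Reduced to its core, the comparison splits the dotted version and compares field by field numerically; a minimal sketch (the echo at the end is illustrative, not part of the harness):

# Sketch: dotted-version "less than" comparison, as used to pick lcov options.
lt() {
    local -a a b; local i
    IFS=. read -ra a <<< "$1"
    IFS=. read -ra b <<< "$2"
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # earlier version
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1   # later version
    done
    return 1                                        # equal
}
lt "$(lcov --version | awk '{print $NF}')" 2 && echo "lcov is pre-2.x: use the 1.x LCOV_OPTS"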
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:53.087 20:34:47 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:53.087 20:34:47 -- common/autotest_common.sh@10 -- # set +x 00:06:53.087 ************************************ 00:06:53.087 START TEST json_config 00:06:53.087 ************************************ 00:06:53.087 20:34:47 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:53.087 20:34:47 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:53.087 20:34:47 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:06:53.087 20:34:47 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:53.087 20:34:48 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:53.087 20:34:48 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:53.087 20:34:48 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:53.087 20:34:48 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:53.087 20:34:48 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:06:53.087 20:34:48 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:06:53.087 20:34:48 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:06:53.087 20:34:48 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:06:53.087 20:34:48 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:06:53.087 20:34:48 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:06:53.087 20:34:48 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:06:53.087 20:34:48 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:53.087 20:34:48 json_config -- scripts/common.sh@344 -- # case "$op" in 00:06:53.087 20:34:48 json_config -- scripts/common.sh@345 -- # : 1 00:06:53.088 20:34:48 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:53.088 20:34:48 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:53.088 20:34:48 json_config -- scripts/common.sh@365 -- # decimal 1 00:06:53.088 20:34:48 json_config -- scripts/common.sh@353 -- # local d=1 00:06:53.088 20:34:48 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:53.088 20:34:48 json_config -- scripts/common.sh@355 -- # echo 1 00:06:53.088 20:34:48 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:06:53.088 20:34:48 json_config -- scripts/common.sh@366 -- # decimal 2 00:06:53.088 20:34:48 json_config -- scripts/common.sh@353 -- # local d=2 00:06:53.088 20:34:48 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:53.088 20:34:48 json_config -- scripts/common.sh@355 -- # echo 2 00:06:53.088 20:34:48 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:06:53.088 20:34:48 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:53.088 20:34:48 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:53.088 20:34:48 json_config -- scripts/common.sh@368 -- # return 0 00:06:53.088 20:34:48 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:53.088 20:34:48 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:53.088 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:53.088 --rc genhtml_branch_coverage=1 00:06:53.088 --rc genhtml_function_coverage=1 00:06:53.088 --rc genhtml_legend=1 00:06:53.088 --rc geninfo_all_blocks=1 00:06:53.088 --rc geninfo_unexecuted_blocks=1 00:06:53.088 00:06:53.088 ' 00:06:53.088 20:34:48 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:53.088 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:53.088 --rc genhtml_branch_coverage=1 00:06:53.088 --rc genhtml_function_coverage=1 00:06:53.088 --rc genhtml_legend=1 00:06:53.088 --rc geninfo_all_blocks=1 00:06:53.088 --rc geninfo_unexecuted_blocks=1 00:06:53.088 00:06:53.088 ' 00:06:53.088 20:34:48 json_config -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:53.088 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:53.088 --rc genhtml_branch_coverage=1 00:06:53.088 --rc genhtml_function_coverage=1 00:06:53.088 --rc genhtml_legend=1 00:06:53.088 --rc geninfo_all_blocks=1 00:06:53.088 --rc geninfo_unexecuted_blocks=1 00:06:53.088 00:06:53.088 ' 00:06:53.088 20:34:48 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:53.088 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:53.088 --rc genhtml_branch_coverage=1 00:06:53.088 --rc genhtml_function_coverage=1 00:06:53.088 --rc genhtml_legend=1 00:06:53.088 --rc geninfo_all_blocks=1 00:06:53.088 --rc geninfo_unexecuted_blocks=1 00:06:53.088 00:06:53.088 ' 00:06:53.088 20:34:48 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:53.088 20:34:48 json_config -- nvmf/common.sh@7 -- # uname -s 00:06:53.088 20:34:48 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:53.088 20:34:48 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:53.088 20:34:48 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:53.088 20:34:48 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:53.088 20:34:48 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:53.088 20:34:48 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:53.088 20:34:48 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:53.088 20:34:48 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:53.088 20:34:48 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:53.088 20:34:48 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:53.088 20:34:48 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b 00:06:53.088 20:34:48 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=5b7a0101-ee75-44bd-b64f-b6a56d193f2b 00:06:53.088 20:34:48 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:53.088 20:34:48 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:53.088 20:34:48 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:53.088 20:34:48 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:53.088 20:34:48 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:53.088 20:34:48 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:06:53.348 20:34:48 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:53.348 20:34:48 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:53.348 20:34:48 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:53.348 20:34:48 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:53.348 20:34:48 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:53.348 20:34:48 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:53.348 20:34:48 json_config -- paths/export.sh@5 -- # export PATH 00:06:53.348 20:34:48 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:53.348 20:34:48 json_config -- nvmf/common.sh@51 -- # : 0 00:06:53.348 20:34:48 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:53.348 20:34:48 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:53.348 20:34:48 json_config -- 
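The NVMF_*/NVME_* variables sourced from nvmf/common.sh are the connection parameters other parts of the suite hand to nvme-cli; this particular test never attaches an initiator, so the sketch below is illustrative only. The address, port, and subsystem NQN mirror values that appear later in this run:

# Sketch: how a host NQN generated as above is typically consumed by nvme-cli.
hostnqn=$(nvme gen-hostnqn)
nvme connect -t tcp -a 127.0.0.1 -s 4420 \
    -n nqn.2016-06.io.spdk:cnode1 --hostnqn "$hostnqn"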
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:53.348 20:34:48 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:53.348 20:34:48 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:53.348 20:34:48 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:53.348 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:53.348 20:34:48 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:53.348 20:34:48 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:53.348 20:34:48 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:53.348 20:34:48 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:06:53.348 20:34:48 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:06:53.348 20:34:48 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:06:53.348 20:34:48 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:06:53.348 20:34:48 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:53.348 20:34:48 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:06:53.348 20:34:48 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:06:53.348 20:34:48 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:06:53.348 20:34:48 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:06:53.348 20:34:48 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:06:53.348 20:34:48 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:06:53.348 20:34:48 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:06:53.348 20:34:48 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:06:53.348 20:34:48 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:06:53.348 20:34:48 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:53.348 INFO: JSON configuration test init 00:06:53.348 20:34:48 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:06:53.348 20:34:48 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:06:53.348 20:34:48 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:06:53.348 20:34:48 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:53.348 20:34:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:53.348 20:34:48 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:06:53.348 20:34:48 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:53.348 20:34:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:53.348 20:34:48 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:06:53.348 20:34:48 json_config -- json_config/common.sh@9 -- # local app=target 00:06:53.348 20:34:48 json_config -- json_config/common.sh@10 -- # shift 
00:06:53.348 20:34:48 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:53.348 20:34:48 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:53.348 20:34:48 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:53.348 20:34:48 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:53.348 20:34:48 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:53.348 20:34:48 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=57457 00:06:53.348 Waiting for target to run... 00:06:53.348 20:34:48 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:53.348 20:34:48 json_config -- json_config/common.sh@25 -- # waitforlisten 57457 /var/tmp/spdk_tgt.sock 00:06:53.348 20:34:48 json_config -- common/autotest_common.sh@835 -- # '[' -z 57457 ']' 00:06:53.348 20:34:48 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:53.348 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:53.348 20:34:48 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:53.348 20:34:48 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:53.348 20:34:48 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:53.348 20:34:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:53.349 20:34:48 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:06:53.349 [2024-11-26 20:34:48.168285] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:06:53.349 [2024-11-26 20:34:48.168404] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57457 ] 00:06:53.608 [2024-11-26 20:34:48.586001] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.867 [2024-11-26 20:34:48.633051] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.433 20:34:49 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:54.433 20:34:49 json_config -- common/autotest_common.sh@868 -- # return 0 00:06:54.433 00:06:54.433 20:34:49 json_config -- json_config/common.sh@26 -- # echo '' 00:06:54.433 20:34:49 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:06:54.433 20:34:49 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:06:54.433 20:34:49 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:54.433 20:34:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:54.433 20:34:49 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:06:54.433 20:34:49 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:06:54.434 20:34:49 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:54.434 20:34:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:54.434 20:34:49 json_config -- json_config/json_config.sh@280 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:06:54.434 20:34:49 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:06:54.434 20:34:49 json_config 
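The harness starts the target paused ('--wait-for-rpc' on a dedicated socket) and only then pushes configuration over RPC. A hedged sketch of that startup pattern; framework_start_init is the usual RPC for resuming a target started this way, though this harness drives initialization through its own config-loading helpers rather than calling it directly:

# Sketch (assumed flow): start paused, wait for the RPC socket, then resume init.
./build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc &
# ... poll until rpc.py answers on the socket ...
./scripts/rpc.py -s /var/tmp/spdk_tgt.sock rpc_get_methods > /dev/null
./scripts/rpc.py -s /var/tmp/spdk_tgt.sock framework_start_init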
-- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:06:54.693 [2024-11-26 20:34:49.439036] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:54.693 20:34:49 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:06:54.693 20:34:49 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:06:54.693 20:34:49 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:54.693 20:34:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:54.693 20:34:49 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:06:54.693 20:34:49 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:06:54.693 20:34:49 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:06:54.693 20:34:49 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:06:54.693 20:34:49 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:06:54.693 20:34:49 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:06:54.693 20:34:49 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:06:54.693 20:34:49 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:06:55.259 20:34:49 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:06:55.259 20:34:49 json_config -- json_config/json_config.sh@51 -- # local get_types 00:06:55.259 20:34:49 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:06:55.259 20:34:49 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:06:55.259 20:34:49 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:06:55.259 20:34:49 json_config -- json_config/json_config.sh@54 -- # sort 00:06:55.259 20:34:49 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:06:55.259 20:34:49 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:06:55.259 20:34:49 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:06:55.259 20:34:49 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:06:55.259 20:34:49 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:55.259 20:34:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:55.259 20:34:50 json_config -- json_config/json_config.sh@62 -- # return 0 00:06:55.259 20:34:50 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:06:55.259 20:34:50 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:06:55.259 20:34:50 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:06:55.259 20:34:50 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:06:55.259 20:34:50 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:06:55.259 20:34:50 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:06:55.259 20:34:50 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:55.259 20:34:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:55.259 20:34:50 json_config -- 
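The notification-type check above is a plain set comparison: the types the test expects to be enabled versus what the target reports through notify_get_types. Collapsed into one place, using the same jq/sort/uniq approach the harness uses:

# Sketch: compare expected vs reported notification types; empty diff means match.
expected="bdev_register bdev_unregister fsdev_register fsdev_unregister"
reported=$(./scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types | jq -r '.[]')
type_diff=$(echo $expected $reported | tr ' ' '\n' | sort | uniq -u)
[[ -z $type_diff ]] && echo "notification types match"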
json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:06:55.259 20:34:50 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:06:55.259 20:34:50 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:06:55.259 20:34:50 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:55.260 20:34:50 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:55.518 MallocForNvmf0 00:06:55.518 20:34:50 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:55.518 20:34:50 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:55.777 MallocForNvmf1 00:06:55.777 20:34:50 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:06:55.777 20:34:50 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:06:56.035 [2024-11-26 20:34:50.783181] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:56.035 20:34:50 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:56.035 20:34:50 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:56.294 20:34:51 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:56.294 20:34:51 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:56.294 20:34:51 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:56.294 20:34:51 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:56.552 20:34:51 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:56.552 20:34:51 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:56.811 [2024-11-26 20:34:51.739788] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:56.811 20:34:51 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:06:56.811 20:34:51 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:56.811 20:34:51 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:57.070 20:34:51 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:06:57.070 20:34:51 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:57.070 20:34:51 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:57.070 20:34:51 json_config -- json_config/json_config.sh@302 -- # [[ 
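For readability, the RPC sequence issued above, collected in one place: two malloc bdevs become namespaces of a single NVMe/TCP subsystem listening on 127.0.0.1:4420 (commands and arguments exactly as this run issued them):

# Sketch: stand up the NVMe/TCP test target over the RPC socket used here.
rpc=./scripts/rpc.py; sock=/var/tmp/spdk_tgt.sock
$rpc -s $sock bdev_malloc_create 8 512 --name MallocForNvmf0
$rpc -s $sock bdev_malloc_create 4 1024 --name MallocForNvmf1
$rpc -s $sock nvmf_create_transport -t tcp -u 8192 -c 0
$rpc -s $sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc -s $sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
$rpc -s $sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
$rpc -s $sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420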
0 -eq 1 ]] 00:06:57.070 20:34:51 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:57.070 20:34:51 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:57.328 MallocBdevForConfigChangeCheck 00:06:57.328 20:34:52 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:06:57.328 20:34:52 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:57.328 20:34:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:57.328 20:34:52 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:06:57.328 20:34:52 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:57.588 INFO: shutting down applications... 00:06:57.588 20:34:52 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:06:57.588 20:34:52 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:06:57.588 20:34:52 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:06:57.588 20:34:52 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:06:57.588 20:34:52 json_config -- json_config/json_config.sh@340 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:06:58.157 Calling clear_iscsi_subsystem 00:06:58.157 Calling clear_nvmf_subsystem 00:06:58.157 Calling clear_nbd_subsystem 00:06:58.157 Calling clear_ublk_subsystem 00:06:58.157 Calling clear_vhost_blk_subsystem 00:06:58.157 Calling clear_vhost_scsi_subsystem 00:06:58.157 Calling clear_bdev_subsystem 00:06:58.157 20:34:52 json_config -- json_config/json_config.sh@344 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:06:58.157 20:34:52 json_config -- json_config/json_config.sh@350 -- # count=100 00:06:58.157 20:34:52 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:06:58.157 20:34:52 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:58.157 20:34:52 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:06:58.157 20:34:52 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:06:58.416 20:34:53 json_config -- json_config/json_config.sh@352 -- # break 00:06:58.416 20:34:53 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:06:58.416 20:34:53 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:06:58.416 20:34:53 json_config -- json_config/common.sh@31 -- # local app=target 00:06:58.416 20:34:53 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:58.416 20:34:53 json_config -- json_config/common.sh@35 -- # [[ -n 57457 ]] 00:06:58.416 20:34:53 json_config -- json_config/common.sh@38 -- # kill -SIGINT 57457 00:06:58.416 20:34:53 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:58.416 20:34:53 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:58.416 20:34:53 json_config -- json_config/common.sh@41 -- # kill -0 57457 00:06:58.416 20:34:53 json_config -- json_config/common.sh@45 -- # 
sleep 0.5 00:06:58.983 20:34:53 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:06:58.983 20:34:53 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:58.983 20:34:53 json_config -- json_config/common.sh@41 -- # kill -0 57457 00:06:58.983 20:34:53 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:58.983 20:34:53 json_config -- json_config/common.sh@43 -- # break 00:06:58.983 20:34:53 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:58.983 20:34:53 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:58.983 SPDK target shutdown done 00:06:58.983 INFO: relaunching applications... 00:06:58.983 20:34:53 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:06:58.983 20:34:53 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:58.983 20:34:53 json_config -- json_config/common.sh@9 -- # local app=target 00:06:58.983 20:34:53 json_config -- json_config/common.sh@10 -- # shift 00:06:58.983 20:34:53 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:58.983 20:34:53 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:58.983 20:34:53 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:58.983 20:34:53 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:58.983 20:34:53 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:58.983 20:34:53 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=57653 00:06:58.983 Waiting for target to run... 00:06:58.983 20:34:53 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:58.983 20:34:53 json_config -- json_config/common.sh@25 -- # waitforlisten 57653 /var/tmp/spdk_tgt.sock 00:06:58.983 20:34:53 json_config -- common/autotest_common.sh@835 -- # '[' -z 57653 ']' 00:06:58.983 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:58.983 20:34:53 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:58.983 20:34:53 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:58.983 20:34:53 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:58.983 20:34:53 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:58.983 20:34:53 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:58.983 20:34:53 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:58.983 [2024-11-26 20:34:53.921551] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:06:58.983 [2024-11-26 20:34:53.921659] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57653 ] 00:06:59.349 [2024-11-26 20:34:54.311559] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.608 [2024-11-26 20:34:54.362529] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.609 [2024-11-26 20:34:54.499877] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:59.868 [2024-11-26 20:34:54.715788] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:59.868 [2024-11-26 20:34:54.747760] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:07:00.126 00:07:00.126 INFO: Checking if target configuration is the same... 00:07:00.126 20:34:54 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:00.126 20:34:54 json_config -- common/autotest_common.sh@868 -- # return 0 00:07:00.126 20:34:54 json_config -- json_config/common.sh@26 -- # echo '' 00:07:00.126 20:34:54 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:07:00.126 20:34:54 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:07:00.126 20:34:54 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:07:00.126 20:34:54 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:07:00.126 20:34:54 json_config -- json_config/json_config.sh@385 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:07:00.126 + '[' 2 -ne 2 ']' 00:07:00.126 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:07:00.126 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:07:00.126 + rootdir=/home/vagrant/spdk_repo/spdk 00:07:00.126 +++ basename /dev/fd/62 00:07:00.126 ++ mktemp /tmp/62.XXX 00:07:00.126 + tmp_file_1=/tmp/62.WVa 00:07:00.126 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:07:00.126 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:07:00.126 + tmp_file_2=/tmp/spdk_tgt_config.json.Lcj 00:07:00.126 + ret=0 00:07:00.126 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:07:00.385 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:07:00.644 + diff -u /tmp/62.WVa /tmp/spdk_tgt_config.json.Lcj 00:07:00.644 INFO: JSON config files are the same 00:07:00.644 + echo 'INFO: JSON config files are the same' 00:07:00.644 + rm /tmp/62.WVa /tmp/spdk_tgt_config.json.Lcj 00:07:00.644 + exit 0 00:07:00.644 INFO: changing configuration and checking if this can be detected... 00:07:00.644 20:34:55 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:07:00.644 20:34:55 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 
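The "is the configuration the same" check done by json_diff.sh is simple in principle: sort both JSON documents into a canonical form with config_filter.py, then let a plain diff decide. A sketch under the assumption that config_filter.py filters stdin to stdout, as the harness uses it here (temporary paths are illustrative):

# Sketch: canonicalize the running config and the on-disk config, then diff.
./scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
    | ./test/json_config/config_filter.py -method sort > /tmp/running.json
./test/json_config/config_filter.py -method sort \
    < spdk_tgt_config.json > /tmp/ondisk.json
diff -u /tmp/running.json /tmp/ondisk.json && echo 'INFO: JSON config files are the same'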
00:07:00.644 20:34:55 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:07:00.644 20:34:55 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:07:00.902 20:34:55 json_config -- json_config/json_config.sh@394 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:07:00.902 20:34:55 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:07:00.902 20:34:55 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:07:00.902 + '[' 2 -ne 2 ']' 00:07:00.902 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:07:00.902 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:07:00.902 + rootdir=/home/vagrant/spdk_repo/spdk 00:07:00.902 +++ basename /dev/fd/62 00:07:00.902 ++ mktemp /tmp/62.XXX 00:07:00.902 + tmp_file_1=/tmp/62.OCS 00:07:00.902 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:07:00.902 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:07:00.902 + tmp_file_2=/tmp/spdk_tgt_config.json.OIo 00:07:00.902 + ret=0 00:07:00.902 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:07:01.468 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:07:01.468 + diff -u /tmp/62.OCS /tmp/spdk_tgt_config.json.OIo 00:07:01.468 + ret=1 00:07:01.468 + echo '=== Start of file: /tmp/62.OCS ===' 00:07:01.468 + cat /tmp/62.OCS 00:07:01.468 + echo '=== End of file: /tmp/62.OCS ===' 00:07:01.468 + echo '' 00:07:01.468 + echo '=== Start of file: /tmp/spdk_tgt_config.json.OIo ===' 00:07:01.468 + cat /tmp/spdk_tgt_config.json.OIo 00:07:01.468 + echo '=== End of file: /tmp/spdk_tgt_config.json.OIo ===' 00:07:01.468 + echo '' 00:07:01.468 + rm /tmp/62.OCS /tmp/spdk_tgt_config.json.OIo 00:07:01.468 + exit 1 00:07:01.468 INFO: configuration change detected. 00:07:01.468 20:34:56 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 
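For the negative case the script first mutates the running target (deleting the MallocBdevForConfigChangeCheck bdev over RPC, as in the trace above) and then repeats the same sorted diff, this time treating a non-zero diff status as the expected outcome. A sketch of that step, not the verbatim script:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
    filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py
    cfg=/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json

    # remove a bdev so the live config no longer matches the file on disk
    $rpc bdev_malloc_delete MallocBdevForConfigChangeCheck

    # the same sorted diff is now expected to fail
    if ! diff -u <($rpc save_config | "$filter" -method sort) \
                 <("$filter" -method sort < "$cfg"); then
        echo 'INFO: configuration change detected.'
    fi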
00:07:01.468 20:34:56 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:07:01.468 20:34:56 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:07:01.468 20:34:56 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:01.468 20:34:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:01.468 20:34:56 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:07:01.468 20:34:56 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:07:01.468 20:34:56 json_config -- json_config/json_config.sh@324 -- # [[ -n 57653 ]] 00:07:01.468 20:34:56 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:07:01.468 20:34:56 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:07:01.468 20:34:56 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:01.468 20:34:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:01.468 20:34:56 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:07:01.468 20:34:56 json_config -- json_config/json_config.sh@200 -- # uname -s 00:07:01.468 20:34:56 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:07:01.468 20:34:56 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:07:01.468 20:34:56 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:07:01.468 20:34:56 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:07:01.468 20:34:56 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:01.468 20:34:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:01.468 20:34:56 json_config -- json_config/json_config.sh@330 -- # killprocess 57653 00:07:01.468 20:34:56 json_config -- common/autotest_common.sh@954 -- # '[' -z 57653 ']' 00:07:01.468 20:34:56 json_config -- common/autotest_common.sh@958 -- # kill -0 57653 00:07:01.468 20:34:56 json_config -- common/autotest_common.sh@959 -- # uname 00:07:01.468 20:34:56 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:01.468 20:34:56 json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57653 00:07:01.468 killing process with pid 57653 00:07:01.468 20:34:56 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:01.468 20:34:56 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:01.468 20:34:56 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57653' 00:07:01.468 20:34:56 json_config -- common/autotest_common.sh@973 -- # kill 57653 00:07:01.468 20:34:56 json_config -- common/autotest_common.sh@978 -- # wait 57653 00:07:01.727 20:34:56 json_config -- json_config/json_config.sh@333 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:07:01.727 20:34:56 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:07:01.727 20:34:56 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:01.727 20:34:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:01.727 INFO: Success 00:07:01.727 20:34:56 json_config -- json_config/json_config.sh@335 -- # return 0 00:07:01.727 20:34:56 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:07:01.727 ************************************ 00:07:01.727 END TEST json_config 00:07:01.727 
************************************ 00:07:01.727 00:07:01.727 real 0m8.704s 00:07:01.727 user 0m12.321s 00:07:01.727 sys 0m2.027s 00:07:01.727 20:34:56 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:01.727 20:34:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:01.727 20:34:56 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:07:01.727 20:34:56 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:01.727 20:34:56 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:01.727 20:34:56 -- common/autotest_common.sh@10 -- # set +x 00:07:01.727 ************************************ 00:07:01.727 START TEST json_config_extra_key 00:07:01.727 ************************************ 00:07:01.727 20:34:56 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:07:01.987 20:34:56 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:01.987 20:34:56 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 00:07:01.987 20:34:56 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:01.987 20:34:56 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:01.987 20:34:56 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:01.987 20:34:56 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:01.987 20:34:56 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:01.987 20:34:56 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:07:01.987 20:34:56 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:07:01.987 20:34:56 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:07:01.987 20:34:56 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:07:01.987 20:34:56 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:07:01.987 20:34:56 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:07:01.987 20:34:56 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:07:01.987 20:34:56 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:01.987 20:34:56 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:07:01.987 20:34:56 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:07:01.987 20:34:56 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:01.987 20:34:56 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:01.987 20:34:56 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:07:01.987 20:34:56 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:07:01.987 20:34:56 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:01.987 20:34:56 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:07:01.987 20:34:56 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:07:01.987 20:34:56 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:07:01.987 20:34:56 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:07:01.987 20:34:56 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:01.987 20:34:56 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:07:01.987 20:34:56 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:07:01.987 20:34:56 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:01.987 20:34:56 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:01.987 20:34:56 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:07:01.987 20:34:56 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:01.987 20:34:56 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:01.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.987 --rc genhtml_branch_coverage=1 00:07:01.987 --rc genhtml_function_coverage=1 00:07:01.987 --rc genhtml_legend=1 00:07:01.987 --rc geninfo_all_blocks=1 00:07:01.987 --rc geninfo_unexecuted_blocks=1 00:07:01.987 00:07:01.987 ' 00:07:01.987 20:34:56 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:01.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.987 --rc genhtml_branch_coverage=1 00:07:01.987 --rc genhtml_function_coverage=1 00:07:01.987 --rc genhtml_legend=1 00:07:01.987 --rc geninfo_all_blocks=1 00:07:01.987 --rc geninfo_unexecuted_blocks=1 00:07:01.987 00:07:01.987 ' 00:07:01.987 20:34:56 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:01.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.987 --rc genhtml_branch_coverage=1 00:07:01.987 --rc genhtml_function_coverage=1 00:07:01.987 --rc genhtml_legend=1 00:07:01.987 --rc geninfo_all_blocks=1 00:07:01.987 --rc geninfo_unexecuted_blocks=1 00:07:01.987 00:07:01.987 ' 00:07:01.987 20:34:56 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:01.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.987 --rc genhtml_branch_coverage=1 00:07:01.987 --rc genhtml_function_coverage=1 00:07:01.987 --rc genhtml_legend=1 00:07:01.987 --rc geninfo_all_blocks=1 00:07:01.987 --rc geninfo_unexecuted_blocks=1 00:07:01.987 00:07:01.987 ' 00:07:01.987 20:34:56 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:01.987 20:34:56 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:07:01.987 20:34:56 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:01.987 20:34:56 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:01.987 20:34:56 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:01.987 20:34:56 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:01.987 20:34:56 
json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:01.987 20:34:56 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:01.987 20:34:56 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:01.987 20:34:56 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:01.987 20:34:56 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:01.987 20:34:56 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:01.987 20:34:56 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b 00:07:01.987 20:34:56 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=5b7a0101-ee75-44bd-b64f-b6a56d193f2b 00:07:01.987 20:34:56 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:01.987 20:34:56 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:01.987 20:34:56 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:07:01.987 20:34:56 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:01.987 20:34:56 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:01.987 20:34:56 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:07:01.987 20:34:56 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:01.987 20:34:56 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:01.987 20:34:56 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:01.987 20:34:56 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:01.987 20:34:56 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:01.987 20:34:56 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:01.987 20:34:56 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:07:01.987 20:34:56 json_config_extra_key -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:01.987 20:34:56 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:07:01.987 20:34:56 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:01.987 20:34:56 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:01.987 20:34:56 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:01.987 20:34:56 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:01.987 20:34:56 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:01.987 20:34:56 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:01.987 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:01.987 20:34:56 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:01.987 20:34:56 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:01.987 20:34:56 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:01.987 20:34:56 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:07:01.988 20:34:56 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:07:01.988 20:34:56 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:07:01.988 20:34:56 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:07:01.988 20:34:56 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:07:01.988 20:34:56 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:07:01.988 20:34:56 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:07:01.988 20:34:56 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:07:01.988 20:34:56 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:07:01.988 20:34:56 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:07:01.988 20:34:56 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:07:01.988 INFO: launching applications... 
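The declare -A lines above are how json_config/common.sh keeps its per-app state: each app (only 'target' in this test) gets an entry in four associative arrays holding its pid, RPC socket, spdk_tgt parameters, and JSON config path, and the helper functions index into them by app name. A cut-down illustration of the pattern:

    declare -A app_pid=(['target']='')
    declare -A app_socket=(['target']='/var/tmp/spdk_tgt.sock')
    declare -A app_params=(['target']='-m 0x1 -s 1024')
    declare -A configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json')

    app=target
    # the launch helper builds its command line from these entries
    echo "spdk_tgt ${app_params[$app]} -r ${app_socket[$app]} --json ${configs_path[$app]}"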
00:07:01.988 20:34:56 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:07:01.988 20:34:56 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:07:01.988 20:34:56 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:07:01.988 20:34:56 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:07:01.988 20:34:56 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:07:01.988 20:34:56 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:07:01.988 20:34:56 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:01.988 20:34:56 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:01.988 20:34:56 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=57807 00:07:01.988 20:34:56 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:07:01.988 Waiting for target to run... 00:07:01.988 20:34:56 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 57807 /var/tmp/spdk_tgt.sock 00:07:01.988 20:34:56 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:07:01.988 20:34:56 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 57807 ']' 00:07:01.988 20:34:56 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:07:01.988 20:34:56 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:01.988 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:07:01.988 20:34:56 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:07:01.988 20:34:56 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:01.988 20:34:56 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:07:01.988 [2024-11-26 20:34:56.954491] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:07:01.988 [2024-11-26 20:34:56.954872] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57807 ] 00:07:02.589 [2024-11-26 20:34:57.351576] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.589 [2024-11-26 20:34:57.398716] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.589 [2024-11-26 20:34:57.430444] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:03.157 20:34:57 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:03.157 20:34:57 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:07:03.157 20:34:57 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:07:03.157 00:07:03.157 INFO: shutting down applications... 00:07:03.157 20:34:57 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
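waitforlisten, which gates the 'Waiting for target to run...' message above, comes from common/autotest_common.sh; it retries against the given socket (here /var/tmp/spdk_tgt.sock) until the freshly started target answers or max_retries runs out. A simplified sketch of that idea (the probe command and retry spacing here are assumptions, not copied from the helper):

    pid=57807                            # spdk_tgt pid from this run
    rpc_addr=/var/tmp/spdk_tgt.sock
    max_retries=100

    for (( i = 0; i < max_retries; i++ )); do
        # give up early if the target died during startup
        kill -0 "$pid" 2>/dev/null || exit 1
        # a successful RPC means the socket is up and serving
        if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 1 -s "$rpc_addr" rpc_get_methods &>/dev/null; then
            break
        fi
        sleep 0.5
    done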
00:07:03.157 20:34:57 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:07:03.157 20:34:57 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:07:03.157 20:34:57 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:07:03.157 20:34:57 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 57807 ]] 00:07:03.157 20:34:57 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 57807 00:07:03.157 20:34:57 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:07:03.157 20:34:57 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:03.157 20:34:57 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57807 00:07:03.157 20:34:57 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:07:03.414 20:34:58 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:07:03.414 20:34:58 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:03.414 20:34:58 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57807 00:07:03.414 20:34:58 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:07:03.414 20:34:58 json_config_extra_key -- json_config/common.sh@43 -- # break 00:07:03.414 20:34:58 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:07:03.414 SPDK target shutdown done 00:07:03.414 Success 00:07:03.414 20:34:58 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:07:03.414 20:34:58 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:07:03.414 00:07:03.414 real 0m1.717s 00:07:03.414 user 0m1.492s 00:07:03.414 sys 0m0.446s 00:07:03.414 20:34:58 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:03.414 20:34:58 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:07:03.414 ************************************ 00:07:03.414 END TEST json_config_extra_key 00:07:03.414 ************************************ 00:07:03.672 20:34:58 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:07:03.672 20:34:58 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:03.672 20:34:58 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:03.672 20:34:58 -- common/autotest_common.sh@10 -- # set +x 00:07:03.672 ************************************ 00:07:03.672 START TEST alias_rpc 00:07:03.672 ************************************ 00:07:03.672 20:34:58 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:07:03.672 * Looking for test storage... 
00:07:03.672 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:07:03.672 20:34:58 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:03.672 20:34:58 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:07:03.672 20:34:58 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:03.672 20:34:58 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:03.672 20:34:58 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:03.672 20:34:58 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:03.672 20:34:58 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:03.672 20:34:58 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:07:03.672 20:34:58 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:07:03.672 20:34:58 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:07:03.672 20:34:58 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:07:03.672 20:34:58 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:07:03.672 20:34:58 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:07:03.672 20:34:58 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:07:03.672 20:34:58 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:03.672 20:34:58 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:07:03.672 20:34:58 alias_rpc -- scripts/common.sh@345 -- # : 1 00:07:03.672 20:34:58 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:03.672 20:34:58 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:03.672 20:34:58 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:07:03.672 20:34:58 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:07:03.672 20:34:58 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:03.672 20:34:58 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:07:03.672 20:34:58 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:07:03.672 20:34:58 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:07:03.672 20:34:58 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:07:03.672 20:34:58 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:03.672 20:34:58 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:07:03.672 20:34:58 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:07:03.672 20:34:58 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:03.672 20:34:58 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:03.672 20:34:58 alias_rpc -- scripts/common.sh@368 -- # return 0 00:07:03.672 20:34:58 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:03.672 20:34:58 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:03.672 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:03.672 --rc genhtml_branch_coverage=1 00:07:03.672 --rc genhtml_function_coverage=1 00:07:03.672 --rc genhtml_legend=1 00:07:03.672 --rc geninfo_all_blocks=1 00:07:03.672 --rc geninfo_unexecuted_blocks=1 00:07:03.672 00:07:03.672 ' 00:07:03.672 20:34:58 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:03.672 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:03.672 --rc genhtml_branch_coverage=1 00:07:03.672 --rc genhtml_function_coverage=1 00:07:03.672 --rc genhtml_legend=1 00:07:03.672 --rc geninfo_all_blocks=1 00:07:03.672 --rc geninfo_unexecuted_blocks=1 00:07:03.672 00:07:03.672 ' 00:07:03.672 20:34:58 alias_rpc -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:03.672 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:03.672 --rc genhtml_branch_coverage=1 00:07:03.672 --rc genhtml_function_coverage=1 00:07:03.672 --rc genhtml_legend=1 00:07:03.672 --rc geninfo_all_blocks=1 00:07:03.672 --rc geninfo_unexecuted_blocks=1 00:07:03.672 00:07:03.672 ' 00:07:03.672 20:34:58 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:03.672 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:03.672 --rc genhtml_branch_coverage=1 00:07:03.672 --rc genhtml_function_coverage=1 00:07:03.672 --rc genhtml_legend=1 00:07:03.672 --rc geninfo_all_blocks=1 00:07:03.672 --rc geninfo_unexecuted_blocks=1 00:07:03.672 00:07:03.672 ' 00:07:03.672 20:34:58 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:03.672 20:34:58 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:03.672 20:34:58 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=57879 00:07:03.931 20:34:58 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 57879 00:07:03.931 20:34:58 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 57879 ']' 00:07:03.931 20:34:58 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:03.931 20:34:58 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:03.931 20:34:58 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:03.931 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:03.931 20:34:58 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:03.931 20:34:58 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:03.931 [2024-11-26 20:34:58.754548] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
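Each of these tests tears its target down with the killprocess helper, whose trace appears for pid 57653 above and for pid 57879 just below: it checks that the pid still exists, looks up the process name with ps to special-case targets launched through sudo, then kills and waits on the process. A reconstruction of the pattern visible in the xtrace, not the verbatim helper:

    function killprocess() {
        local pid=$1
        [[ -n $pid ]] || return 1
        kill -0 "$pid" || return 0               # nothing left to do
        local name
        name=$(ps --no-headers -o comm= "$pid")  # e.g. reactor_0 for spdk_tgt
        if [[ $name == sudo ]]; then
            # when launched via sudo, signal the child app instead
            kill "$(pgrep -P "$pid")"
        else
            echo "killing process with pid $pid"
            kill "$pid"
        fi
        wait "$pid"
    }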
00:07:03.931 [2024-11-26 20:34:58.754694] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57879 ] 00:07:03.931 [2024-11-26 20:34:58.910522] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.202 [2024-11-26 20:34:58.985650] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.202 [2024-11-26 20:34:59.056707] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:04.459 20:34:59 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:04.459 20:34:59 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:04.459 20:34:59 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:07:04.717 20:34:59 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 57879 00:07:04.717 20:34:59 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 57879 ']' 00:07:04.717 20:34:59 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 57879 00:07:04.717 20:34:59 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:07:04.717 20:34:59 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:04.717 20:34:59 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57879 00:07:04.717 killing process with pid 57879 00:07:04.717 20:34:59 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:04.717 20:34:59 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:04.717 20:34:59 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57879' 00:07:04.717 20:34:59 alias_rpc -- common/autotest_common.sh@973 -- # kill 57879 00:07:04.717 20:34:59 alias_rpc -- common/autotest_common.sh@978 -- # wait 57879 00:07:05.324 ************************************ 00:07:05.324 END TEST alias_rpc 00:07:05.324 ************************************ 00:07:05.324 00:07:05.324 real 0m1.522s 00:07:05.324 user 0m1.644s 00:07:05.324 sys 0m0.480s 00:07:05.324 20:34:59 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:05.324 20:34:59 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:05.324 20:35:00 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:07:05.324 20:35:00 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:07:05.324 20:35:00 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:05.324 20:35:00 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:05.324 20:35:00 -- common/autotest_common.sh@10 -- # set +x 00:07:05.324 ************************************ 00:07:05.324 START TEST spdkcli_tcp 00:07:05.324 ************************************ 00:07:05.324 20:35:00 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:07:05.324 * Looking for test storage... 
00:07:05.324 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:07:05.324 20:35:00 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:05.324 20:35:00 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:07:05.324 20:35:00 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:05.324 20:35:00 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:05.324 20:35:00 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:05.324 20:35:00 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:05.324 20:35:00 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:05.324 20:35:00 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:07:05.324 20:35:00 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:07:05.324 20:35:00 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:07:05.324 20:35:00 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:07:05.325 20:35:00 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:07:05.325 20:35:00 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:07:05.325 20:35:00 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:07:05.325 20:35:00 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:05.325 20:35:00 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:07:05.325 20:35:00 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:07:05.325 20:35:00 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:05.325 20:35:00 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:05.325 20:35:00 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:07:05.325 20:35:00 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:07:05.325 20:35:00 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:05.325 20:35:00 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:07:05.325 20:35:00 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:07:05.325 20:35:00 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:07:05.325 20:35:00 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:07:05.325 20:35:00 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:05.325 20:35:00 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:07:05.325 20:35:00 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:07:05.325 20:35:00 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:05.325 20:35:00 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:05.325 20:35:00 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:07:05.325 20:35:00 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:05.325 20:35:00 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:05.325 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.325 --rc genhtml_branch_coverage=1 00:07:05.325 --rc genhtml_function_coverage=1 00:07:05.325 --rc genhtml_legend=1 00:07:05.325 --rc geninfo_all_blocks=1 00:07:05.325 --rc geninfo_unexecuted_blocks=1 00:07:05.325 00:07:05.325 ' 00:07:05.325 20:35:00 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:05.325 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.325 --rc genhtml_branch_coverage=1 00:07:05.325 --rc genhtml_function_coverage=1 00:07:05.325 --rc genhtml_legend=1 00:07:05.325 --rc geninfo_all_blocks=1 00:07:05.325 --rc geninfo_unexecuted_blocks=1 00:07:05.325 
00:07:05.325 ' 00:07:05.325 20:35:00 spdkcli_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:05.325 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.325 --rc genhtml_branch_coverage=1 00:07:05.325 --rc genhtml_function_coverage=1 00:07:05.325 --rc genhtml_legend=1 00:07:05.325 --rc geninfo_all_blocks=1 00:07:05.325 --rc geninfo_unexecuted_blocks=1 00:07:05.325 00:07:05.325 ' 00:07:05.325 20:35:00 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:05.325 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.325 --rc genhtml_branch_coverage=1 00:07:05.325 --rc genhtml_function_coverage=1 00:07:05.325 --rc genhtml_legend=1 00:07:05.325 --rc geninfo_all_blocks=1 00:07:05.325 --rc geninfo_unexecuted_blocks=1 00:07:05.325 00:07:05.325 ' 00:07:05.325 20:35:00 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:07:05.325 20:35:00 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:07:05.325 20:35:00 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:07:05.325 20:35:00 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:07:05.325 20:35:00 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:07:05.325 20:35:00 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:07:05.325 20:35:00 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:07:05.325 20:35:00 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:05.325 20:35:00 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:05.325 20:35:00 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:07:05.325 20:35:00 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=57956 00:07:05.325 20:35:00 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 57956 00:07:05.325 20:35:00 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 57956 ']' 00:07:05.325 20:35:00 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:05.325 20:35:00 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:05.325 20:35:00 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:05.325 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:05.325 20:35:00 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:05.325 20:35:00 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:05.325 [2024-11-26 20:35:00.302267] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
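What tcp.sh exercises next (see the socat and rpc.py lines that follow) is the RPC interface over TCP rather than the default UNIX socket: a socat process bridges 127.0.0.1:9998 to /var/tmp/spdk.sock, and rpc_get_methods is issued through that bridge, producing the long method list below. The core of the setup, in sketch form:

    # bridge a local TCP port to the target's UNIX-domain RPC socket
    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
    socat_pid=$!

    # same RPC, reached over TCP (retry count and timeout as used in the trace)
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods

    kill "$socat_pid"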
00:07:05.325 [2024-11-26 20:35:00.302631] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57956 ] 00:07:05.582 [2024-11-26 20:35:00.467777] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:05.582 [2024-11-26 20:35:00.533959] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:05.582 [2024-11-26 20:35:00.533973] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.840 [2024-11-26 20:35:00.602523] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:06.403 20:35:01 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:06.403 20:35:01 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:07:06.403 20:35:01 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:07:06.403 20:35:01 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=57973 00:07:06.403 20:35:01 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:07:06.661 [ 00:07:06.661 "bdev_malloc_delete", 00:07:06.661 "bdev_malloc_create", 00:07:06.661 "bdev_null_resize", 00:07:06.661 "bdev_null_delete", 00:07:06.661 "bdev_null_create", 00:07:06.661 "bdev_nvme_cuse_unregister", 00:07:06.661 "bdev_nvme_cuse_register", 00:07:06.661 "bdev_opal_new_user", 00:07:06.661 "bdev_opal_set_lock_state", 00:07:06.661 "bdev_opal_delete", 00:07:06.661 "bdev_opal_get_info", 00:07:06.661 "bdev_opal_create", 00:07:06.661 "bdev_nvme_opal_revert", 00:07:06.661 "bdev_nvme_opal_init", 00:07:06.661 "bdev_nvme_send_cmd", 00:07:06.661 "bdev_nvme_set_keys", 00:07:06.661 "bdev_nvme_get_path_iostat", 00:07:06.661 "bdev_nvme_get_mdns_discovery_info", 00:07:06.661 "bdev_nvme_stop_mdns_discovery", 00:07:06.661 "bdev_nvme_start_mdns_discovery", 00:07:06.661 "bdev_nvme_set_multipath_policy", 00:07:06.661 "bdev_nvme_set_preferred_path", 00:07:06.661 "bdev_nvme_get_io_paths", 00:07:06.661 "bdev_nvme_remove_error_injection", 00:07:06.661 "bdev_nvme_add_error_injection", 00:07:06.661 "bdev_nvme_get_discovery_info", 00:07:06.661 "bdev_nvme_stop_discovery", 00:07:06.661 "bdev_nvme_start_discovery", 00:07:06.661 "bdev_nvme_get_controller_health_info", 00:07:06.661 "bdev_nvme_disable_controller", 00:07:06.661 "bdev_nvme_enable_controller", 00:07:06.661 "bdev_nvme_reset_controller", 00:07:06.661 "bdev_nvme_get_transport_statistics", 00:07:06.661 "bdev_nvme_apply_firmware", 00:07:06.661 "bdev_nvme_detach_controller", 00:07:06.661 "bdev_nvme_get_controllers", 00:07:06.661 "bdev_nvme_attach_controller", 00:07:06.661 "bdev_nvme_set_hotplug", 00:07:06.661 "bdev_nvme_set_options", 00:07:06.661 "bdev_passthru_delete", 00:07:06.661 "bdev_passthru_create", 00:07:06.661 "bdev_lvol_set_parent_bdev", 00:07:06.661 "bdev_lvol_set_parent", 00:07:06.661 "bdev_lvol_check_shallow_copy", 00:07:06.661 "bdev_lvol_start_shallow_copy", 00:07:06.661 "bdev_lvol_grow_lvstore", 00:07:06.661 "bdev_lvol_get_lvols", 00:07:06.661 "bdev_lvol_get_lvstores", 00:07:06.661 "bdev_lvol_delete", 00:07:06.661 "bdev_lvol_set_read_only", 00:07:06.661 "bdev_lvol_resize", 00:07:06.661 "bdev_lvol_decouple_parent", 00:07:06.661 "bdev_lvol_inflate", 00:07:06.661 "bdev_lvol_rename", 00:07:06.661 "bdev_lvol_clone_bdev", 00:07:06.661 "bdev_lvol_clone", 00:07:06.661 "bdev_lvol_snapshot", 
00:07:06.661 "bdev_lvol_create", 00:07:06.661 "bdev_lvol_delete_lvstore", 00:07:06.661 "bdev_lvol_rename_lvstore", 00:07:06.661 "bdev_lvol_create_lvstore", 00:07:06.661 "bdev_raid_set_options", 00:07:06.661 "bdev_raid_remove_base_bdev", 00:07:06.661 "bdev_raid_add_base_bdev", 00:07:06.661 "bdev_raid_delete", 00:07:06.661 "bdev_raid_create", 00:07:06.661 "bdev_raid_get_bdevs", 00:07:06.661 "bdev_error_inject_error", 00:07:06.661 "bdev_error_delete", 00:07:06.661 "bdev_error_create", 00:07:06.661 "bdev_split_delete", 00:07:06.661 "bdev_split_create", 00:07:06.661 "bdev_delay_delete", 00:07:06.661 "bdev_delay_create", 00:07:06.661 "bdev_delay_update_latency", 00:07:06.661 "bdev_zone_block_delete", 00:07:06.661 "bdev_zone_block_create", 00:07:06.661 "blobfs_create", 00:07:06.661 "blobfs_detect", 00:07:06.661 "blobfs_set_cache_size", 00:07:06.661 "bdev_aio_delete", 00:07:06.661 "bdev_aio_rescan", 00:07:06.661 "bdev_aio_create", 00:07:06.661 "bdev_ftl_set_property", 00:07:06.661 "bdev_ftl_get_properties", 00:07:06.661 "bdev_ftl_get_stats", 00:07:06.661 "bdev_ftl_unmap", 00:07:06.661 "bdev_ftl_unload", 00:07:06.661 "bdev_ftl_delete", 00:07:06.661 "bdev_ftl_load", 00:07:06.661 "bdev_ftl_create", 00:07:06.661 "bdev_virtio_attach_controller", 00:07:06.661 "bdev_virtio_scsi_get_devices", 00:07:06.661 "bdev_virtio_detach_controller", 00:07:06.661 "bdev_virtio_blk_set_hotplug", 00:07:06.661 "bdev_iscsi_delete", 00:07:06.661 "bdev_iscsi_create", 00:07:06.661 "bdev_iscsi_set_options", 00:07:06.661 "bdev_uring_delete", 00:07:06.661 "bdev_uring_rescan", 00:07:06.661 "bdev_uring_create", 00:07:06.661 "accel_error_inject_error", 00:07:06.661 "ioat_scan_accel_module", 00:07:06.661 "dsa_scan_accel_module", 00:07:06.661 "iaa_scan_accel_module", 00:07:06.661 "keyring_file_remove_key", 00:07:06.661 "keyring_file_add_key", 00:07:06.661 "keyring_linux_set_options", 00:07:06.661 "fsdev_aio_delete", 00:07:06.661 "fsdev_aio_create", 00:07:06.661 "iscsi_get_histogram", 00:07:06.661 "iscsi_enable_histogram", 00:07:06.661 "iscsi_set_options", 00:07:06.661 "iscsi_get_auth_groups", 00:07:06.661 "iscsi_auth_group_remove_secret", 00:07:06.661 "iscsi_auth_group_add_secret", 00:07:06.661 "iscsi_delete_auth_group", 00:07:06.661 "iscsi_create_auth_group", 00:07:06.661 "iscsi_set_discovery_auth", 00:07:06.661 "iscsi_get_options", 00:07:06.661 "iscsi_target_node_request_logout", 00:07:06.662 "iscsi_target_node_set_redirect", 00:07:06.662 "iscsi_target_node_set_auth", 00:07:06.662 "iscsi_target_node_add_lun", 00:07:06.662 "iscsi_get_stats", 00:07:06.662 "iscsi_get_connections", 00:07:06.662 "iscsi_portal_group_set_auth", 00:07:06.662 "iscsi_start_portal_group", 00:07:06.662 "iscsi_delete_portal_group", 00:07:06.662 "iscsi_create_portal_group", 00:07:06.662 "iscsi_get_portal_groups", 00:07:06.662 "iscsi_delete_target_node", 00:07:06.662 "iscsi_target_node_remove_pg_ig_maps", 00:07:06.662 "iscsi_target_node_add_pg_ig_maps", 00:07:06.662 "iscsi_create_target_node", 00:07:06.662 "iscsi_get_target_nodes", 00:07:06.662 "iscsi_delete_initiator_group", 00:07:06.662 "iscsi_initiator_group_remove_initiators", 00:07:06.662 "iscsi_initiator_group_add_initiators", 00:07:06.662 "iscsi_create_initiator_group", 00:07:06.662 "iscsi_get_initiator_groups", 00:07:06.662 "nvmf_set_crdt", 00:07:06.662 "nvmf_set_config", 00:07:06.662 "nvmf_set_max_subsystems", 00:07:06.662 "nvmf_stop_mdns_prr", 00:07:06.662 "nvmf_publish_mdns_prr", 00:07:06.662 "nvmf_subsystem_get_listeners", 00:07:06.662 "nvmf_subsystem_get_qpairs", 00:07:06.662 
"nvmf_subsystem_get_controllers", 00:07:06.662 "nvmf_get_stats", 00:07:06.662 "nvmf_get_transports", 00:07:06.662 "nvmf_create_transport", 00:07:06.662 "nvmf_get_targets", 00:07:06.662 "nvmf_delete_target", 00:07:06.662 "nvmf_create_target", 00:07:06.662 "nvmf_subsystem_allow_any_host", 00:07:06.662 "nvmf_subsystem_set_keys", 00:07:06.662 "nvmf_subsystem_remove_host", 00:07:06.662 "nvmf_subsystem_add_host", 00:07:06.662 "nvmf_ns_remove_host", 00:07:06.662 "nvmf_ns_add_host", 00:07:06.662 "nvmf_subsystem_remove_ns", 00:07:06.662 "nvmf_subsystem_set_ns_ana_group", 00:07:06.662 "nvmf_subsystem_add_ns", 00:07:06.662 "nvmf_subsystem_listener_set_ana_state", 00:07:06.662 "nvmf_discovery_get_referrals", 00:07:06.662 "nvmf_discovery_remove_referral", 00:07:06.662 "nvmf_discovery_add_referral", 00:07:06.662 "nvmf_subsystem_remove_listener", 00:07:06.662 "nvmf_subsystem_add_listener", 00:07:06.662 "nvmf_delete_subsystem", 00:07:06.662 "nvmf_create_subsystem", 00:07:06.662 "nvmf_get_subsystems", 00:07:06.662 "env_dpdk_get_mem_stats", 00:07:06.662 "nbd_get_disks", 00:07:06.662 "nbd_stop_disk", 00:07:06.662 "nbd_start_disk", 00:07:06.662 "ublk_recover_disk", 00:07:06.662 "ublk_get_disks", 00:07:06.662 "ublk_stop_disk", 00:07:06.662 "ublk_start_disk", 00:07:06.662 "ublk_destroy_target", 00:07:06.662 "ublk_create_target", 00:07:06.662 "virtio_blk_create_transport", 00:07:06.662 "virtio_blk_get_transports", 00:07:06.662 "vhost_controller_set_coalescing", 00:07:06.662 "vhost_get_controllers", 00:07:06.662 "vhost_delete_controller", 00:07:06.662 "vhost_create_blk_controller", 00:07:06.662 "vhost_scsi_controller_remove_target", 00:07:06.662 "vhost_scsi_controller_add_target", 00:07:06.662 "vhost_start_scsi_controller", 00:07:06.662 "vhost_create_scsi_controller", 00:07:06.662 "thread_set_cpumask", 00:07:06.662 "scheduler_set_options", 00:07:06.662 "framework_get_governor", 00:07:06.662 "framework_get_scheduler", 00:07:06.662 "framework_set_scheduler", 00:07:06.662 "framework_get_reactors", 00:07:06.662 "thread_get_io_channels", 00:07:06.662 "thread_get_pollers", 00:07:06.662 "thread_get_stats", 00:07:06.662 "framework_monitor_context_switch", 00:07:06.662 "spdk_kill_instance", 00:07:06.662 "log_enable_timestamps", 00:07:06.662 "log_get_flags", 00:07:06.662 "log_clear_flag", 00:07:06.662 "log_set_flag", 00:07:06.662 "log_get_level", 00:07:06.662 "log_set_level", 00:07:06.662 "log_get_print_level", 00:07:06.662 "log_set_print_level", 00:07:06.662 "framework_enable_cpumask_locks", 00:07:06.662 "framework_disable_cpumask_locks", 00:07:06.662 "framework_wait_init", 00:07:06.662 "framework_start_init", 00:07:06.662 "scsi_get_devices", 00:07:06.662 "bdev_get_histogram", 00:07:06.662 "bdev_enable_histogram", 00:07:06.662 "bdev_set_qos_limit", 00:07:06.662 "bdev_set_qd_sampling_period", 00:07:06.662 "bdev_get_bdevs", 00:07:06.662 "bdev_reset_iostat", 00:07:06.662 "bdev_get_iostat", 00:07:06.662 "bdev_examine", 00:07:06.662 "bdev_wait_for_examine", 00:07:06.662 "bdev_set_options", 00:07:06.662 "accel_get_stats", 00:07:06.662 "accel_set_options", 00:07:06.662 "accel_set_driver", 00:07:06.662 "accel_crypto_key_destroy", 00:07:06.662 "accel_crypto_keys_get", 00:07:06.662 "accel_crypto_key_create", 00:07:06.662 "accel_assign_opc", 00:07:06.662 "accel_get_module_info", 00:07:06.662 "accel_get_opc_assignments", 00:07:06.662 "vmd_rescan", 00:07:06.662 "vmd_remove_device", 00:07:06.662 "vmd_enable", 00:07:06.662 "sock_get_default_impl", 00:07:06.662 "sock_set_default_impl", 00:07:06.662 "sock_impl_set_options", 00:07:06.662 
"sock_impl_get_options", 00:07:06.662 "iobuf_get_stats", 00:07:06.662 "iobuf_set_options", 00:07:06.662 "keyring_get_keys", 00:07:06.662 "framework_get_pci_devices", 00:07:06.662 "framework_get_config", 00:07:06.662 "framework_get_subsystems", 00:07:06.662 "fsdev_set_opts", 00:07:06.662 "fsdev_get_opts", 00:07:06.662 "trace_get_info", 00:07:06.662 "trace_get_tpoint_group_mask", 00:07:06.662 "trace_disable_tpoint_group", 00:07:06.662 "trace_enable_tpoint_group", 00:07:06.662 "trace_clear_tpoint_mask", 00:07:06.662 "trace_set_tpoint_mask", 00:07:06.662 "notify_get_notifications", 00:07:06.662 "notify_get_types", 00:07:06.662 "spdk_get_version", 00:07:06.662 "rpc_get_methods" 00:07:06.662 ] 00:07:06.662 20:35:01 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:07:06.662 20:35:01 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:06.662 20:35:01 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:06.662 20:35:01 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:07:06.662 20:35:01 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 57956 00:07:06.662 20:35:01 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 57956 ']' 00:07:06.662 20:35:01 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 57956 00:07:06.662 20:35:01 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:07:06.662 20:35:01 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:06.662 20:35:01 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57956 00:07:06.662 killing process with pid 57956 00:07:06.662 20:35:01 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:06.662 20:35:01 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:06.662 20:35:01 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57956' 00:07:06.662 20:35:01 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 57956 00:07:06.662 20:35:01 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 57956 00:07:07.229 ************************************ 00:07:07.229 END TEST spdkcli_tcp 00:07:07.229 ************************************ 00:07:07.229 00:07:07.229 real 0m1.924s 00:07:07.229 user 0m3.474s 00:07:07.229 sys 0m0.530s 00:07:07.229 20:35:01 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:07.229 20:35:01 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:07.229 20:35:01 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:07:07.229 20:35:02 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:07.229 20:35:02 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:07.229 20:35:02 -- common/autotest_common.sh@10 -- # set +x 00:07:07.229 ************************************ 00:07:07.229 START TEST dpdk_mem_utility 00:07:07.229 ************************************ 00:07:07.229 20:35:02 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:07:07.229 * Looking for test storage... 
00:07:07.229 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:07:07.229 20:35:02 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:07.229 20:35:02 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:07:07.229 20:35:02 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:07.229 20:35:02 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:07.229 20:35:02 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:07.229 20:35:02 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:07.229 20:35:02 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:07.229 20:35:02 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:07:07.229 20:35:02 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:07:07.229 20:35:02 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:07:07.229 20:35:02 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:07:07.229 20:35:02 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:07:07.229 20:35:02 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:07:07.229 20:35:02 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:07:07.229 20:35:02 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:07.229 20:35:02 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:07:07.229 20:35:02 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:07:07.229 20:35:02 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:07.229 20:35:02 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:07.229 20:35:02 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:07:07.229 20:35:02 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:07:07.229 20:35:02 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:07.229 20:35:02 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:07:07.229 20:35:02 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:07:07.229 20:35:02 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:07:07.229 20:35:02 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:07:07.229 20:35:02 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:07.229 20:35:02 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:07:07.229 20:35:02 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:07:07.229 20:35:02 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:07.229 20:35:02 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:07.229 20:35:02 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:07:07.229 20:35:02 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:07.229 20:35:02 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:07.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.229 --rc genhtml_branch_coverage=1 00:07:07.229 --rc genhtml_function_coverage=1 00:07:07.229 --rc genhtml_legend=1 00:07:07.229 --rc geninfo_all_blocks=1 00:07:07.229 --rc geninfo_unexecuted_blocks=1 00:07:07.229 00:07:07.229 ' 00:07:07.229 20:35:02 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:07.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.229 --rc 
genhtml_branch_coverage=1 00:07:07.229 --rc genhtml_function_coverage=1 00:07:07.229 --rc genhtml_legend=1 00:07:07.229 --rc geninfo_all_blocks=1 00:07:07.229 --rc geninfo_unexecuted_blocks=1 00:07:07.229 00:07:07.229 ' 00:07:07.229 20:35:02 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:07.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.229 --rc genhtml_branch_coverage=1 00:07:07.229 --rc genhtml_function_coverage=1 00:07:07.229 --rc genhtml_legend=1 00:07:07.229 --rc geninfo_all_blocks=1 00:07:07.229 --rc geninfo_unexecuted_blocks=1 00:07:07.229 00:07:07.229 ' 00:07:07.229 20:35:02 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:07.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.229 --rc genhtml_branch_coverage=1 00:07:07.229 --rc genhtml_function_coverage=1 00:07:07.229 --rc genhtml_legend=1 00:07:07.229 --rc geninfo_all_blocks=1 00:07:07.229 --rc geninfo_unexecuted_blocks=1 00:07:07.229 00:07:07.229 ' 00:07:07.229 20:35:02 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:07:07.229 20:35:02 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=58055 00:07:07.229 20:35:02 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:07.229 20:35:02 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 58055 00:07:07.229 20:35:02 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 58055 ']' 00:07:07.229 20:35:02 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:07.229 20:35:02 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:07.229 20:35:02 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:07.229 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:07.229 20:35:02 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:07.229 20:35:02 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:07.488 [2024-11-26 20:35:02.282192] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
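The memory report below comes from two pieces: the env_dpdk_get_mem_stats RPC, which asks the running spdk_tgt to write its DPDK memory state to /tmp/spdk_mem_dump.txt, and scripts/dpdk_mem_info.py, which renders that dump as the heap/mempool/memzone summary and, with -m 0, the per-element detail for heap 0 shown further down. Reduced to its essentials, the test does roughly this (exact script option semantics are assumed from the trace, not from the tool's documentation):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    mem_info=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py

    # the target writes its dump; the RPC returns {"filename": "/tmp/spdk_mem_dump.txt"}
    "$rpc" env_dpdk_get_mem_stats

    # summarize the dump, then show the detailed element lists for heap 0 (as in the log)
    "$mem_info"
    "$mem_info" -m 0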
00:07:07.488 [2024-11-26 20:35:02.282551] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58055 ] 00:07:07.488 [2024-11-26 20:35:02.438911] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.746 [2024-11-26 20:35:02.503692] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.746 [2024-11-26 20:35:02.572577] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:08.369 20:35:03 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:08.369 20:35:03 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:07:08.369 20:35:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:07:08.369 20:35:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:07:08.369 20:35:03 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:08.369 20:35:03 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:08.369 { 00:07:08.369 "filename": "/tmp/spdk_mem_dump.txt" 00:07:08.369 } 00:07:08.369 20:35:03 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:08.369 20:35:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:07:08.630 DPDK memory size 818.000000 MiB in 1 heap(s) 00:07:08.630 1 heaps totaling size 818.000000 MiB 00:07:08.630 size: 818.000000 MiB heap id: 0 00:07:08.630 end heaps---------- 00:07:08.630 9 mempools totaling size 603.782043 MiB 00:07:08.630 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:07:08.630 size: 158.602051 MiB name: PDU_data_out_Pool 00:07:08.630 size: 100.555481 MiB name: bdev_io_58055 00:07:08.630 size: 50.003479 MiB name: msgpool_58055 00:07:08.630 size: 36.509338 MiB name: fsdev_io_58055 00:07:08.630 size: 21.763794 MiB name: PDU_Pool 00:07:08.630 size: 19.513306 MiB name: SCSI_TASK_Pool 00:07:08.630 size: 4.133484 MiB name: evtpool_58055 00:07:08.630 size: 0.026123 MiB name: Session_Pool 00:07:08.630 end mempools------- 00:07:08.630 6 memzones totaling size 4.142822 MiB 00:07:08.630 size: 1.000366 MiB name: RG_ring_0_58055 00:07:08.630 size: 1.000366 MiB name: RG_ring_1_58055 00:07:08.630 size: 1.000366 MiB name: RG_ring_4_58055 00:07:08.630 size: 1.000366 MiB name: RG_ring_5_58055 00:07:08.630 size: 0.125366 MiB name: RG_ring_2_58055 00:07:08.630 size: 0.015991 MiB name: RG_ring_3_58055 00:07:08.630 end memzones------- 00:07:08.630 20:35:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:07:08.630 heap id: 0 total size: 818.000000 MiB number of busy elements: 317 number of free elements: 15 00:07:08.630 list of free elements. 
size: 10.802490 MiB 00:07:08.630 element at address: 0x200019200000 with size: 0.999878 MiB 00:07:08.630 element at address: 0x200019400000 with size: 0.999878 MiB 00:07:08.630 element at address: 0x200032000000 with size: 0.994446 MiB 00:07:08.630 element at address: 0x200000400000 with size: 0.993958 MiB 00:07:08.630 element at address: 0x200006400000 with size: 0.959839 MiB 00:07:08.630 element at address: 0x200012c00000 with size: 0.944275 MiB 00:07:08.630 element at address: 0x200019600000 with size: 0.936584 MiB 00:07:08.630 element at address: 0x200000200000 with size: 0.717346 MiB 00:07:08.630 element at address: 0x20001ae00000 with size: 0.567688 MiB 00:07:08.630 element at address: 0x20000a600000 with size: 0.488892 MiB 00:07:08.630 element at address: 0x200000c00000 with size: 0.486267 MiB 00:07:08.630 element at address: 0x200019800000 with size: 0.485657 MiB 00:07:08.630 element at address: 0x200003e00000 with size: 0.480286 MiB 00:07:08.630 element at address: 0x200028200000 with size: 0.395752 MiB 00:07:08.630 element at address: 0x200000800000 with size: 0.351746 MiB 00:07:08.630 list of standard malloc elements. size: 199.268616 MiB 00:07:08.630 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:07:08.630 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:07:08.630 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:07:08.630 element at address: 0x2000194fff80 with size: 1.000122 MiB 00:07:08.630 element at address: 0x2000196fff80 with size: 1.000122 MiB 00:07:08.630 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:07:08.630 element at address: 0x2000196eff00 with size: 0.062622 MiB 00:07:08.630 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:07:08.630 element at address: 0x2000196efdc0 with size: 0.000305 MiB 00:07:08.630 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:07:08.630 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:07:08.630 element at address: 0x2000004fe740 with size: 0.000183 MiB 00:07:08.630 element at address: 0x2000004fe800 with size: 0.000183 MiB 00:07:08.630 element at address: 0x2000004fe8c0 with size: 0.000183 MiB 00:07:08.630 element at address: 0x2000004fe980 with size: 0.000183 MiB 00:07:08.630 element at address: 0x2000004fea40 with size: 0.000183 MiB 00:07:08.630 element at address: 0x2000004feb00 with size: 0.000183 MiB 00:07:08.630 element at address: 0x2000004febc0 with size: 0.000183 MiB 00:07:08.630 element at address: 0x2000004fec80 with size: 0.000183 MiB 00:07:08.630 element at address: 0x2000004fed40 with size: 0.000183 MiB 00:07:08.630 element at address: 0x2000004fee00 with size: 0.000183 MiB 00:07:08.630 element at address: 0x2000004feec0 with size: 0.000183 MiB 00:07:08.630 element at address: 0x2000004fef80 with size: 0.000183 MiB 00:07:08.630 element at address: 0x2000004ff040 with size: 0.000183 MiB 00:07:08.630 element at address: 0x2000004ff100 with size: 0.000183 MiB 00:07:08.630 element at address: 0x2000004ff1c0 with size: 0.000183 MiB 00:07:08.630 element at address: 0x2000004ff280 with size: 0.000183 MiB 00:07:08.630 element at address: 0x2000004ff340 with size: 0.000183 MiB 00:07:08.630 element at address: 0x2000004ff400 with size: 0.000183 MiB 00:07:08.630 element at address: 0x2000004ff4c0 with size: 0.000183 MiB 00:07:08.630 element at address: 0x2000004ff580 with size: 0.000183 MiB 00:07:08.630 element at address: 0x2000004ff640 with size: 0.000183 MiB 00:07:08.630 element at address: 0x2000004ff700 with size: 0.000183 MiB 
00:07:08.630 element at address: 0x2000004ff7c0 with size: 0.000183 MiB 00:07:08.630 element at address: 0x2000004ff880 with size: 0.000183 MiB 00:07:08.630 element at address: 0x2000004ff940 with size: 0.000183 MiB 00:07:08.630 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:07:08.630 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:07:08.630 element at address: 0x2000004ffcc0 with size: 0.000183 MiB 00:07:08.630 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:07:08.630 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:07:08.630 element at address: 0x20000085a0c0 with size: 0.000183 MiB 00:07:08.630 element at address: 0x20000085a2c0 with size: 0.000183 MiB 00:07:08.630 element at address: 0x20000085e580 with size: 0.000183 MiB 00:07:08.630 element at address: 0x20000087e840 with size: 0.000183 MiB 00:07:08.630 element at address: 0x20000087e900 with size: 0.000183 MiB 00:07:08.630 element at address: 0x20000087e9c0 with size: 0.000183 MiB 00:07:08.630 element at address: 0x20000087ea80 with size: 0.000183 MiB 00:07:08.630 element at address: 0x20000087eb40 with size: 0.000183 MiB 00:07:08.630 element at address: 0x20000087ec00 with size: 0.000183 MiB 00:07:08.630 element at address: 0x20000087ecc0 with size: 0.000183 MiB 00:07:08.630 element at address: 0x20000087ed80 with size: 0.000183 MiB 00:07:08.630 element at address: 0x20000087ee40 with size: 0.000183 MiB 00:07:08.630 element at address: 0x20000087ef00 with size: 0.000183 MiB 00:07:08.630 element at address: 0x20000087efc0 with size: 0.000183 MiB 00:07:08.630 element at address: 0x20000087f080 with size: 0.000183 MiB 00:07:08.630 element at address: 0x20000087f140 with size: 0.000183 MiB 00:07:08.630 element at address: 0x20000087f200 with size: 0.000183 MiB 00:07:08.630 element at address: 0x20000087f2c0 with size: 0.000183 MiB 00:07:08.630 element at address: 0x20000087f380 with size: 0.000183 MiB 00:07:08.630 element at address: 0x20000087f440 with size: 0.000183 MiB 00:07:08.630 element at address: 0x20000087f500 with size: 0.000183 MiB 00:07:08.630 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:07:08.630 element at address: 0x20000087f680 with size: 0.000183 MiB 00:07:08.630 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:07:08.630 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:07:08.630 element at address: 0x200000c7c7c0 with size: 0.000183 MiB 00:07:08.630 element at address: 0x200000c7c880 with size: 0.000183 MiB 00:07:08.630 element at address: 0x200000c7c940 with size: 0.000183 MiB 00:07:08.630 element at address: 0x200000c7ca00 with size: 0.000183 MiB 00:07:08.630 element at address: 0x200000c7cac0 with size: 0.000183 MiB 00:07:08.630 element at address: 0x200000c7cb80 with size: 0.000183 MiB 00:07:08.630 element at address: 0x200000c7cc40 with size: 0.000183 MiB 00:07:08.630 element at address: 0x200000c7cd00 with size: 0.000183 MiB 00:07:08.630 element at address: 0x200000c7cdc0 with size: 0.000183 MiB 00:07:08.630 element at address: 0x200000c7ce80 with size: 0.000183 MiB 00:07:08.630 element at address: 0x200000c7cf40 with size: 0.000183 MiB 00:07:08.630 element at address: 0x200000c7d000 with size: 0.000183 MiB 00:07:08.630 element at address: 0x200000c7d0c0 with size: 0.000183 MiB 00:07:08.630 element at address: 0x200000c7d180 with size: 0.000183 MiB 00:07:08.630 element at address: 0x200000c7d240 with size: 0.000183 MiB 00:07:08.630 element at address: 0x200000c7d300 with size: 0.000183 MiB 00:07:08.631 element at 
address: 0x200000c7d3c0 with size: 0.000183 MiB 00:07:08.631 element at address: 0x200000c7d480 with size: 0.000183 MiB 00:07:08.631 element at address: 0x200000c7d540 with size: 0.000183 MiB 00:07:08.631 element at address: 0x200000c7d600 with size: 0.000183 MiB 00:07:08.631 element at address: 0x200000c7d6c0 with size: 0.000183 MiB 00:07:08.631 element at address: 0x200000c7d780 with size: 0.000183 MiB 00:07:08.631 element at address: 0x200000c7d840 with size: 0.000183 MiB 00:07:08.631 element at address: 0x200000c7d900 with size: 0.000183 MiB 00:07:08.631 element at address: 0x200000c7d9c0 with size: 0.000183 MiB 00:07:08.631 element at address: 0x200000c7da80 with size: 0.000183 MiB 00:07:08.631 element at address: 0x200000c7db40 with size: 0.000183 MiB 00:07:08.631 element at address: 0x200000c7dc00 with size: 0.000183 MiB 00:07:08.631 element at address: 0x200000c7dcc0 with size: 0.000183 MiB 00:07:08.631 element at address: 0x200000c7dd80 with size: 0.000183 MiB 00:07:08.631 element at address: 0x200000c7de40 with size: 0.000183 MiB 00:07:08.631 element at address: 0x200000c7df00 with size: 0.000183 MiB 00:07:08.631 element at address: 0x200000c7dfc0 with size: 0.000183 MiB 00:07:08.631 element at address: 0x200000c7e080 with size: 0.000183 MiB 00:07:08.631 element at address: 0x200000c7e140 with size: 0.000183 MiB 00:07:08.631 element at address: 0x200000c7e200 with size: 0.000183 MiB 00:07:08.631 element at address: 0x200000c7e2c0 with size: 0.000183 MiB 00:07:08.631 element at address: 0x200000c7e380 with size: 0.000183 MiB 00:07:08.631 element at address: 0x200000c7e440 with size: 0.000183 MiB 00:07:08.631 element at address: 0x200000c7e500 with size: 0.000183 MiB 00:07:08.631 element at address: 0x200000c7e5c0 with size: 0.000183 MiB 00:07:08.631 element at address: 0x200000c7e680 with size: 0.000183 MiB 00:07:08.631 element at address: 0x200000c7e740 with size: 0.000183 MiB 00:07:08.631 element at address: 0x200000c7e800 with size: 0.000183 MiB 00:07:08.631 element at address: 0x200000c7e8c0 with size: 0.000183 MiB 00:07:08.631 element at address: 0x200000c7e980 with size: 0.000183 MiB 00:07:08.631 element at address: 0x200000c7ea40 with size: 0.000183 MiB 00:07:08.631 element at address: 0x200000c7eb00 with size: 0.000183 MiB 00:07:08.631 element at address: 0x200000c7ebc0 with size: 0.000183 MiB 00:07:08.631 element at address: 0x200000c7ec80 with size: 0.000183 MiB 00:07:08.631 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:07:08.631 element at address: 0x200000cff000 with size: 0.000183 MiB 00:07:08.631 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:07:08.631 element at address: 0x200003e7af40 with size: 0.000183 MiB 00:07:08.631 element at address: 0x200003e7b000 with size: 0.000183 MiB 00:07:08.631 element at address: 0x200003e7b0c0 with size: 0.000183 MiB 00:07:08.631 element at address: 0x200003e7b180 with size: 0.000183 MiB 00:07:08.631 element at address: 0x200003e7b240 with size: 0.000183 MiB 00:07:08.631 element at address: 0x200003e7b300 with size: 0.000183 MiB 00:07:08.631 element at address: 0x200003e7b3c0 with size: 0.000183 MiB 00:07:08.631 element at address: 0x200003e7b480 with size: 0.000183 MiB 00:07:08.631 element at address: 0x200003e7b540 with size: 0.000183 MiB 00:07:08.631 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:07:08.631 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:07:08.631 element at address: 0x200003efb980 with size: 0.000183 MiB 00:07:08.631 element at address: 0x2000064fdd80 
with size: 0.000183 MiB 00:07:08.631 element at address: 0x20000a67d280 with size: 0.000183 MiB 00:07:08.631 element at address: 0x20000a67d340 with size: 0.000183 MiB 00:07:08.631 element at address: 0x20000a67d400 with size: 0.000183 MiB 00:07:08.631 element at address: 0x20000a67d4c0 with size: 0.000183 MiB 00:07:08.631 element at address: 0x20000a67d580 with size: 0.000183 MiB 00:07:08.631 element at address: 0x20000a67d640 with size: 0.000183 MiB 00:07:08.631 element at address: 0x20000a67d700 with size: 0.000183 MiB 00:07:08.631 element at address: 0x20000a67d7c0 with size: 0.000183 MiB 00:07:08.631 element at address: 0x20000a67d880 with size: 0.000183 MiB 00:07:08.631 element at address: 0x20000a67d940 with size: 0.000183 MiB 00:07:08.631 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:07:08.631 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:07:08.631 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 00:07:08.631 element at address: 0x200012cf1bc0 with size: 0.000183 MiB 00:07:08.631 element at address: 0x2000196efc40 with size: 0.000183 MiB 00:07:08.631 element at address: 0x2000196efd00 with size: 0.000183 MiB 00:07:08.631 element at address: 0x2000198bc740 with size: 0.000183 MiB 00:07:08.631 element at address: 0x20001ae91540 with size: 0.000183 MiB 00:07:08.631 element at address: 0x20001ae91600 with size: 0.000183 MiB 00:07:08.631 element at address: 0x20001ae916c0 with size: 0.000183 MiB 00:07:08.631 element at address: 0x20001ae91780 with size: 0.000183 MiB 00:07:08.631 element at address: 0x20001ae91840 with size: 0.000183 MiB 00:07:08.631 element at address: 0x20001ae91900 with size: 0.000183 MiB 00:07:08.631 element at address: 0x20001ae919c0 with size: 0.000183 MiB 00:07:08.631 element at address: 0x20001ae91a80 with size: 0.000183 MiB 00:07:08.631 element at address: 0x20001ae91b40 with size: 0.000183 MiB 00:07:08.631 element at address: 0x20001ae91c00 with size: 0.000183 MiB 00:07:08.631 element at address: 0x20001ae91cc0 with size: 0.000183 MiB 00:07:08.631 element at address: 0x20001ae91d80 with size: 0.000183 MiB 00:07:08.631 element at address: 0x20001ae91e40 with size: 0.000183 MiB 00:07:08.631 element at address: 0x20001ae91f00 with size: 0.000183 MiB 00:07:08.631 element at address: 0x20001ae91fc0 with size: 0.000183 MiB 00:07:08.631 element at address: 0x20001ae92080 with size: 0.000183 MiB 00:07:08.631 element at address: 0x20001ae92140 with size: 0.000183 MiB 00:07:08.631 element at address: 0x20001ae92200 with size: 0.000183 MiB 00:07:08.631 element at address: 0x20001ae922c0 with size: 0.000183 MiB 00:07:08.631 element at address: 0x20001ae92380 with size: 0.000183 MiB 00:07:08.631 element at address: 0x20001ae92440 with size: 0.000183 MiB 00:07:08.631 element at address: 0x20001ae92500 with size: 0.000183 MiB 00:07:08.631 element at address: 0x20001ae925c0 with size: 0.000183 MiB 00:07:08.631 element at address: 0x20001ae92680 with size: 0.000183 MiB 00:07:08.631 element at address: 0x20001ae92740 with size: 0.000183 MiB 00:07:08.631 element at address: 0x20001ae92800 with size: 0.000183 MiB 00:07:08.631 element at address: 0x20001ae928c0 with size: 0.000183 MiB 00:07:08.631 element at address: 0x20001ae92980 with size: 0.000183 MiB 00:07:08.631 element at address: 0x20001ae92a40 with size: 0.000183 MiB 00:07:08.631 element at address: 0x20001ae92b00 with size: 0.000183 MiB 00:07:08.631 element at address: 0x20001ae92bc0 with size: 0.000183 MiB 00:07:08.631 element at address: 0x20001ae92c80 with size: 0.000183 MiB 
00:07:08.631 element at address: 0x20001ae92d40 with size: 0.000183 MiB 00:07:08.631 element at address: 0x20001ae92e00 with size: 0.000183 MiB 00:07:08.631 element at address: 0x20001ae92ec0 with size: 0.000183 MiB 00:07:08.631 element at address: 0x20001ae92f80 with size: 0.000183 MiB 00:07:08.631 element at address: 0x20001ae93040 with size: 0.000183 MiB 00:07:08.631 element at address: 0x20001ae93100 with size: 0.000183 MiB 00:07:08.631 element at address: 0x20001ae931c0 with size: 0.000183 MiB 00:07:08.631 element at address: 0x20001ae93280 with size: 0.000183 MiB 00:07:08.631 element at address: 0x20001ae93340 with size: 0.000183 MiB 00:07:08.631 element at address: 0x20001ae93400 with size: 0.000183 MiB 00:07:08.631 element at address: 0x20001ae934c0 with size: 0.000183 MiB 00:07:08.631 element at address: 0x20001ae93580 with size: 0.000183 MiB 00:07:08.631 element at address: 0x20001ae93640 with size: 0.000183 MiB 00:07:08.631 element at address: 0x20001ae93700 with size: 0.000183 MiB 00:07:08.631 element at address: 0x20001ae937c0 with size: 0.000183 MiB 00:07:08.631 element at address: 0x20001ae93880 with size: 0.000183 MiB 00:07:08.631 element at address: 0x20001ae93940 with size: 0.000183 MiB 00:07:08.631 element at address: 0x20001ae93a00 with size: 0.000183 MiB 00:07:08.631 element at address: 0x20001ae93ac0 with size: 0.000183 MiB 00:07:08.631 element at address: 0x20001ae93b80 with size: 0.000183 MiB 00:07:08.631 element at address: 0x20001ae93c40 with size: 0.000183 MiB 00:07:08.631 element at address: 0x20001ae93d00 with size: 0.000183 MiB 00:07:08.631 element at address: 0x20001ae93dc0 with size: 0.000183 MiB 00:07:08.631 element at address: 0x20001ae93e80 with size: 0.000183 MiB 00:07:08.631 element at address: 0x20001ae93f40 with size: 0.000183 MiB 00:07:08.631 element at address: 0x20001ae94000 with size: 0.000183 MiB 00:07:08.631 element at address: 0x20001ae940c0 with size: 0.000183 MiB 00:07:08.631 element at address: 0x20001ae94180 with size: 0.000183 MiB 00:07:08.631 element at address: 0x20001ae94240 with size: 0.000183 MiB 00:07:08.631 element at address: 0x20001ae94300 with size: 0.000183 MiB 00:07:08.631 element at address: 0x20001ae943c0 with size: 0.000183 MiB 00:07:08.631 element at address: 0x20001ae94480 with size: 0.000183 MiB 00:07:08.631 element at address: 0x20001ae94540 with size: 0.000183 MiB 00:07:08.631 element at address: 0x20001ae94600 with size: 0.000183 MiB 00:07:08.631 element at address: 0x20001ae946c0 with size: 0.000183 MiB 00:07:08.631 element at address: 0x20001ae94780 with size: 0.000183 MiB 00:07:08.631 element at address: 0x20001ae94840 with size: 0.000183 MiB 00:07:08.631 element at address: 0x20001ae94900 with size: 0.000183 MiB 00:07:08.631 element at address: 0x20001ae949c0 with size: 0.000183 MiB 00:07:08.631 element at address: 0x20001ae94a80 with size: 0.000183 MiB 00:07:08.631 element at address: 0x20001ae94b40 with size: 0.000183 MiB 00:07:08.631 element at address: 0x20001ae94c00 with size: 0.000183 MiB 00:07:08.631 element at address: 0x20001ae94cc0 with size: 0.000183 MiB 00:07:08.631 element at address: 0x20001ae94d80 with size: 0.000183 MiB 00:07:08.631 element at address: 0x20001ae94e40 with size: 0.000183 MiB 00:07:08.631 element at address: 0x20001ae94f00 with size: 0.000183 MiB 00:07:08.631 element at address: 0x20001ae94fc0 with size: 0.000183 MiB 00:07:08.631 element at address: 0x20001ae95080 with size: 0.000183 MiB 00:07:08.631 element at address: 0x20001ae95140 with size: 0.000183 MiB 00:07:08.631 element at 
address: 0x20001ae95200 with size: 0.000183 MiB 00:07:08.631 element at address: 0x20001ae952c0 with size: 0.000183 MiB 00:07:08.631 element at address: 0x20001ae95380 with size: 0.000183 MiB 00:07:08.631 element at address: 0x20001ae95440 with size: 0.000183 MiB 00:07:08.631 element at address: 0x200028265500 with size: 0.000183 MiB 00:07:08.631 element at address: 0x2000282655c0 with size: 0.000183 MiB 00:07:08.631 element at address: 0x20002826c1c0 with size: 0.000183 MiB 00:07:08.631 element at address: 0x20002826c3c0 with size: 0.000183 MiB 00:07:08.631 element at address: 0x20002826c480 with size: 0.000183 MiB 00:07:08.631 element at address: 0x20002826c540 with size: 0.000183 MiB 00:07:08.631 element at address: 0x20002826c600 with size: 0.000183 MiB 00:07:08.631 element at address: 0x20002826c6c0 with size: 0.000183 MiB 00:07:08.631 element at address: 0x20002826c780 with size: 0.000183 MiB 00:07:08.631 element at address: 0x20002826c840 with size: 0.000183 MiB 00:07:08.631 element at address: 0x20002826c900 with size: 0.000183 MiB 00:07:08.631 element at address: 0x20002826c9c0 with size: 0.000183 MiB 00:07:08.631 element at address: 0x20002826ca80 with size: 0.000183 MiB 00:07:08.631 element at address: 0x20002826cb40 with size: 0.000183 MiB 00:07:08.631 element at address: 0x20002826cc00 with size: 0.000183 MiB 00:07:08.631 element at address: 0x20002826ccc0 with size: 0.000183 MiB 00:07:08.631 element at address: 0x20002826cd80 with size: 0.000183 MiB 00:07:08.631 element at address: 0x20002826ce40 with size: 0.000183 MiB 00:07:08.631 element at address: 0x20002826cf00 with size: 0.000183 MiB 00:07:08.631 element at address: 0x20002826cfc0 with size: 0.000183 MiB 00:07:08.631 element at address: 0x20002826d080 with size: 0.000183 MiB 00:07:08.631 element at address: 0x20002826d140 with size: 0.000183 MiB 00:07:08.631 element at address: 0x20002826d200 with size: 0.000183 MiB 00:07:08.631 element at address: 0x20002826d2c0 with size: 0.000183 MiB 00:07:08.631 element at address: 0x20002826d380 with size: 0.000183 MiB 00:07:08.631 element at address: 0x20002826d440 with size: 0.000183 MiB 00:07:08.631 element at address: 0x20002826d500 with size: 0.000183 MiB 00:07:08.631 element at address: 0x20002826d5c0 with size: 0.000183 MiB 00:07:08.631 element at address: 0x20002826d680 with size: 0.000183 MiB 00:07:08.631 element at address: 0x20002826d740 with size: 0.000183 MiB 00:07:08.631 element at address: 0x20002826d800 with size: 0.000183 MiB 00:07:08.631 element at address: 0x20002826d8c0 with size: 0.000183 MiB 00:07:08.631 element at address: 0x20002826d980 with size: 0.000183 MiB 00:07:08.632 element at address: 0x20002826da40 with size: 0.000183 MiB 00:07:08.632 element at address: 0x20002826db00 with size: 0.000183 MiB 00:07:08.632 element at address: 0x20002826dbc0 with size: 0.000183 MiB 00:07:08.632 element at address: 0x20002826dc80 with size: 0.000183 MiB 00:07:08.632 element at address: 0x20002826dd40 with size: 0.000183 MiB 00:07:08.632 element at address: 0x20002826de00 with size: 0.000183 MiB 00:07:08.632 element at address: 0x20002826dec0 with size: 0.000183 MiB 00:07:08.632 element at address: 0x20002826df80 with size: 0.000183 MiB 00:07:08.632 element at address: 0x20002826e040 with size: 0.000183 MiB 00:07:08.632 element at address: 0x20002826e100 with size: 0.000183 MiB 00:07:08.632 element at address: 0x20002826e1c0 with size: 0.000183 MiB 00:07:08.632 element at address: 0x20002826e280 with size: 0.000183 MiB 00:07:08.632 element at address: 0x20002826e340 
with size: 0.000183 MiB 00:07:08.632 element at address: 0x20002826e400 with size: 0.000183 MiB 00:07:08.632 element at address: 0x20002826e4c0 with size: 0.000183 MiB 00:07:08.632 element at address: 0x20002826e580 with size: 0.000183 MiB 00:07:08.632 element at address: 0x20002826e640 with size: 0.000183 MiB 00:07:08.632 element at address: 0x20002826e700 with size: 0.000183 MiB 00:07:08.632 element at address: 0x20002826e7c0 with size: 0.000183 MiB 00:07:08.632 element at address: 0x20002826e880 with size: 0.000183 MiB 00:07:08.632 element at address: 0x20002826e940 with size: 0.000183 MiB 00:07:08.632 element at address: 0x20002826ea00 with size: 0.000183 MiB 00:07:08.632 element at address: 0x20002826eac0 with size: 0.000183 MiB 00:07:08.632 element at address: 0x20002826eb80 with size: 0.000183 MiB 00:07:08.632 element at address: 0x20002826ec40 with size: 0.000183 MiB 00:07:08.632 element at address: 0x20002826ed00 with size: 0.000183 MiB 00:07:08.632 element at address: 0x20002826edc0 with size: 0.000183 MiB 00:07:08.632 element at address: 0x20002826ee80 with size: 0.000183 MiB 00:07:08.632 element at address: 0x20002826ef40 with size: 0.000183 MiB 00:07:08.632 element at address: 0x20002826f000 with size: 0.000183 MiB 00:07:08.632 element at address: 0x20002826f0c0 with size: 0.000183 MiB 00:07:08.632 element at address: 0x20002826f180 with size: 0.000183 MiB 00:07:08.632 element at address: 0x20002826f240 with size: 0.000183 MiB 00:07:08.632 element at address: 0x20002826f300 with size: 0.000183 MiB 00:07:08.632 element at address: 0x20002826f3c0 with size: 0.000183 MiB 00:07:08.632 element at address: 0x20002826f480 with size: 0.000183 MiB 00:07:08.632 element at address: 0x20002826f540 with size: 0.000183 MiB 00:07:08.632 element at address: 0x20002826f600 with size: 0.000183 MiB 00:07:08.632 element at address: 0x20002826f6c0 with size: 0.000183 MiB 00:07:08.632 element at address: 0x20002826f780 with size: 0.000183 MiB 00:07:08.632 element at address: 0x20002826f840 with size: 0.000183 MiB 00:07:08.632 element at address: 0x20002826f900 with size: 0.000183 MiB 00:07:08.632 element at address: 0x20002826f9c0 with size: 0.000183 MiB 00:07:08.632 element at address: 0x20002826fa80 with size: 0.000183 MiB 00:07:08.632 element at address: 0x20002826fb40 with size: 0.000183 MiB 00:07:08.632 element at address: 0x20002826fc00 with size: 0.000183 MiB 00:07:08.632 element at address: 0x20002826fcc0 with size: 0.000183 MiB 00:07:08.632 element at address: 0x20002826fd80 with size: 0.000183 MiB 00:07:08.632 element at address: 0x20002826fe40 with size: 0.000183 MiB 00:07:08.632 element at address: 0x20002826ff00 with size: 0.000183 MiB 00:07:08.632 list of memzone associated elements. 
size: 607.928894 MiB 00:07:08.632 element at address: 0x20001ae95500 with size: 211.416748 MiB 00:07:08.632 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:07:08.632 element at address: 0x20002826ffc0 with size: 157.562561 MiB 00:07:08.632 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:07:08.632 element at address: 0x200012df1e80 with size: 100.055054 MiB 00:07:08.632 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_58055_0 00:07:08.632 element at address: 0x200000dff380 with size: 48.003052 MiB 00:07:08.632 associated memzone info: size: 48.002930 MiB name: MP_msgpool_58055_0 00:07:08.632 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:07:08.632 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_58055_0 00:07:08.632 element at address: 0x2000199be940 with size: 20.255554 MiB 00:07:08.632 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:07:08.632 element at address: 0x2000321feb40 with size: 18.005066 MiB 00:07:08.632 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:07:08.632 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:07:08.632 associated memzone info: size: 3.000122 MiB name: MP_evtpool_58055_0 00:07:08.632 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:07:08.632 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_58055 00:07:08.632 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:07:08.632 associated memzone info: size: 1.007996 MiB name: MP_evtpool_58055 00:07:08.632 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:07:08.632 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:07:08.632 element at address: 0x2000198bc800 with size: 1.008118 MiB 00:07:08.632 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:07:08.632 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:07:08.632 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:07:08.632 element at address: 0x200003efba40 with size: 1.008118 MiB 00:07:08.632 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:07:08.632 element at address: 0x200000cff180 with size: 1.000488 MiB 00:07:08.632 associated memzone info: size: 1.000366 MiB name: RG_ring_0_58055 00:07:08.632 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:07:08.632 associated memzone info: size: 1.000366 MiB name: RG_ring_1_58055 00:07:08.632 element at address: 0x200012cf1c80 with size: 1.000488 MiB 00:07:08.632 associated memzone info: size: 1.000366 MiB name: RG_ring_4_58055 00:07:08.632 element at address: 0x2000320fe940 with size: 1.000488 MiB 00:07:08.632 associated memzone info: size: 1.000366 MiB name: RG_ring_5_58055 00:07:08.632 element at address: 0x20000087f740 with size: 0.500488 MiB 00:07:08.632 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_58055 00:07:08.632 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:07:08.632 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_58055 00:07:08.632 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:07:08.632 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:07:08.632 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:07:08.632 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:07:08.632 element at address: 0x20001987c540 with size: 0.250488 MiB 00:07:08.632 associated memzone info: size: 0.250366 
MiB name: RG_MP_PDU_immediate_data_Pool 00:07:08.632 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:07:08.632 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_58055 00:07:08.632 element at address: 0x20000085e640 with size: 0.125488 MiB 00:07:08.632 associated memzone info: size: 0.125366 MiB name: RG_ring_2_58055 00:07:08.632 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:07:08.632 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:07:08.632 element at address: 0x200028265680 with size: 0.023743 MiB 00:07:08.632 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:07:08.632 element at address: 0x20000085a380 with size: 0.016113 MiB 00:07:08.632 associated memzone info: size: 0.015991 MiB name: RG_ring_3_58055 00:07:08.632 element at address: 0x20002826b7c0 with size: 0.002441 MiB 00:07:08.632 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:07:08.632 element at address: 0x2000004ffb80 with size: 0.000305 MiB 00:07:08.632 associated memzone info: size: 0.000183 MiB name: MP_msgpool_58055 00:07:08.632 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:07:08.632 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_58055 00:07:08.632 element at address: 0x20000085a180 with size: 0.000305 MiB 00:07:08.632 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_58055 00:07:08.632 element at address: 0x20002826c280 with size: 0.000305 MiB 00:07:08.632 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:07:08.632 20:35:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:07:08.632 20:35:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 58055 00:07:08.632 20:35:03 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 58055 ']' 00:07:08.632 20:35:03 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 58055 00:07:08.632 20:35:03 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:07:08.632 20:35:03 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:08.632 20:35:03 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58055 00:07:08.632 killing process with pid 58055 00:07:08.632 20:35:03 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:08.632 20:35:03 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:08.632 20:35:03 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58055' 00:07:08.632 20:35:03 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 58055 00:07:08.632 20:35:03 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 58055 00:07:08.890 ************************************ 00:07:08.890 END TEST dpdk_mem_utility 00:07:08.890 ************************************ 00:07:08.890 00:07:08.890 real 0m1.813s 00:07:08.890 user 0m1.951s 00:07:08.890 sys 0m0.493s 00:07:08.890 20:35:03 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:08.890 20:35:03 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:08.890 20:35:03 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:07:08.890 20:35:03 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:08.890 20:35:03 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:08.890 20:35:03 -- common/autotest_common.sh@10 -- # set +x 
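The dpdk_mem_utility run that just ended follows a small, repeatable pattern visible in the trace: start spdk_tgt, wait for its RPC socket, call the env_dpdk_get_mem_stats RPC (the dump goes to /tmp/spdk_mem_dump.txt), then post-process the dump with scripts/dpdk_mem_info.py, once for the heap/mempool/memzone summary and once with -m 0 for the per-element listing reproduced above. A condensed, hand-written sketch of that flow, with a plain sleep standing in for the waitforlisten helper used by the real test:

#!/usr/bin/env bash
# Condensed sketch of the dpdk_mem_utility flow traced above; paths match the log.
SPDK_DIR=${SPDK_DIR:-/home/vagrant/spdk_repo/spdk}

"$SPDK_DIR/build/bin/spdk_tgt" &         # start the target application
spdkpid=$!
trap 'kill $spdkpid' EXIT

sleep 3                                  # real test: waitforlisten $spdkpid on /var/tmp/spdk.sock

# Ask the target to dump DPDK memory stats; the dump lands in /tmp/spdk_mem_dump.txt.
"$SPDK_DIR/scripts/rpc.py" env_dpdk_get_mem_stats

# Summarize heaps / mempools / memzones, then show per-element detail for heap 0.
"$SPDK_DIR/scripts/dpdk_mem_info.py"
"$SPDK_DIR/scripts/dpdk_mem_info.py" -m 0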
00:07:08.890 ************************************ 00:07:08.890 START TEST event 00:07:08.890 ************************************ 00:07:08.890 20:35:03 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:07:09.149 * Looking for test storage... 00:07:09.149 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:07:09.149 20:35:03 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:09.149 20:35:03 event -- common/autotest_common.sh@1693 -- # lcov --version 00:07:09.149 20:35:03 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:09.149 20:35:04 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:09.149 20:35:04 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:09.149 20:35:04 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:09.149 20:35:04 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:09.149 20:35:04 event -- scripts/common.sh@336 -- # IFS=.-: 00:07:09.149 20:35:04 event -- scripts/common.sh@336 -- # read -ra ver1 00:07:09.149 20:35:04 event -- scripts/common.sh@337 -- # IFS=.-: 00:07:09.149 20:35:04 event -- scripts/common.sh@337 -- # read -ra ver2 00:07:09.149 20:35:04 event -- scripts/common.sh@338 -- # local 'op=<' 00:07:09.149 20:35:04 event -- scripts/common.sh@340 -- # ver1_l=2 00:07:09.149 20:35:04 event -- scripts/common.sh@341 -- # ver2_l=1 00:07:09.149 20:35:04 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:09.149 20:35:04 event -- scripts/common.sh@344 -- # case "$op" in 00:07:09.149 20:35:04 event -- scripts/common.sh@345 -- # : 1 00:07:09.149 20:35:04 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:09.149 20:35:04 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:09.149 20:35:04 event -- scripts/common.sh@365 -- # decimal 1 00:07:09.149 20:35:04 event -- scripts/common.sh@353 -- # local d=1 00:07:09.149 20:35:04 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:09.149 20:35:04 event -- scripts/common.sh@355 -- # echo 1 00:07:09.149 20:35:04 event -- scripts/common.sh@365 -- # ver1[v]=1 00:07:09.149 20:35:04 event -- scripts/common.sh@366 -- # decimal 2 00:07:09.149 20:35:04 event -- scripts/common.sh@353 -- # local d=2 00:07:09.149 20:35:04 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:09.149 20:35:04 event -- scripts/common.sh@355 -- # echo 2 00:07:09.149 20:35:04 event -- scripts/common.sh@366 -- # ver2[v]=2 00:07:09.149 20:35:04 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:09.149 20:35:04 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:09.149 20:35:04 event -- scripts/common.sh@368 -- # return 0 00:07:09.149 20:35:04 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:09.149 20:35:04 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:09.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:09.149 --rc genhtml_branch_coverage=1 00:07:09.149 --rc genhtml_function_coverage=1 00:07:09.149 --rc genhtml_legend=1 00:07:09.149 --rc geninfo_all_blocks=1 00:07:09.149 --rc geninfo_unexecuted_blocks=1 00:07:09.149 00:07:09.149 ' 00:07:09.149 20:35:04 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:09.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:09.149 --rc genhtml_branch_coverage=1 00:07:09.149 --rc genhtml_function_coverage=1 00:07:09.149 --rc genhtml_legend=1 00:07:09.149 --rc 
geninfo_all_blocks=1 00:07:09.149 --rc geninfo_unexecuted_blocks=1 00:07:09.149 00:07:09.149 ' 00:07:09.149 20:35:04 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:09.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:09.149 --rc genhtml_branch_coverage=1 00:07:09.149 --rc genhtml_function_coverage=1 00:07:09.149 --rc genhtml_legend=1 00:07:09.149 --rc geninfo_all_blocks=1 00:07:09.149 --rc geninfo_unexecuted_blocks=1 00:07:09.149 00:07:09.149 ' 00:07:09.149 20:35:04 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:09.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:09.149 --rc genhtml_branch_coverage=1 00:07:09.150 --rc genhtml_function_coverage=1 00:07:09.150 --rc genhtml_legend=1 00:07:09.150 --rc geninfo_all_blocks=1 00:07:09.150 --rc geninfo_unexecuted_blocks=1 00:07:09.150 00:07:09.150 ' 00:07:09.150 20:35:04 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:07:09.150 20:35:04 event -- bdev/nbd_common.sh@6 -- # set -e 00:07:09.150 20:35:04 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:07:09.150 20:35:04 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:07:09.150 20:35:04 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:09.150 20:35:04 event -- common/autotest_common.sh@10 -- # set +x 00:07:09.150 ************************************ 00:07:09.150 START TEST event_perf 00:07:09.150 ************************************ 00:07:09.150 20:35:04 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:07:09.150 Running I/O for 1 seconds...[2024-11-26 20:35:04.116526] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:07:09.150 [2024-11-26 20:35:04.116619] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58140 ] 00:07:09.409 [2024-11-26 20:35:04.260618] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:09.409 [2024-11-26 20:35:04.320549] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:09.409 [2024-11-26 20:35:04.320727] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:09.409 [2024-11-26 20:35:04.320790] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:09.409 [2024-11-26 20:35:04.320792] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.787 Running I/O for 1 seconds... 00:07:10.787 lcore 0: 168457 00:07:10.787 lcore 1: 168456 00:07:10.787 lcore 2: 168456 00:07:10.787 lcore 3: 168457 00:07:10.787 done. 
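The per-lcore counters above come from a one-second run of test/event/event_perf with core mask 0xF, i.e. one reactor per core on cores 0-3, each counting how many events it processed. A standalone re-run would look roughly like this; the awk summation at the end is an added convenience, not part of the test itself:

#!/usr/bin/env bash
# Re-run the micro-benchmark whose per-lcore counts appear above.
# -m 0xF binds one reactor to each of cores 0-3; -t 1 runs for one second.
SPDK_DIR=${SPDK_DIR:-/home/vagrant/spdk_repo/spdk}
"$SPDK_DIR/test/event/event_perf/event_perf" -m 0xF -t 1 |
    awk '/^lcore/ {sum += $3} END {print "total events in 1s:", sum}'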
00:07:10.787 00:07:10.787 real 0m1.276s 00:07:10.787 user 0m4.083s 00:07:10.787 sys 0m0.059s 00:07:10.787 20:35:05 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:10.787 20:35:05 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:07:10.787 ************************************ 00:07:10.787 END TEST event_perf 00:07:10.787 ************************************ 00:07:10.787 20:35:05 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:07:10.787 20:35:05 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:07:10.787 20:35:05 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:10.787 20:35:05 event -- common/autotest_common.sh@10 -- # set +x 00:07:10.787 ************************************ 00:07:10.787 START TEST event_reactor 00:07:10.787 ************************************ 00:07:10.787 20:35:05 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:07:10.787 [2024-11-26 20:35:05.455927] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:07:10.787 [2024-11-26 20:35:05.456048] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58173 ] 00:07:10.787 [2024-11-26 20:35:05.607753] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.787 [2024-11-26 20:35:05.665721] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.721 test_start 00:07:11.721 oneshot 00:07:11.721 tick 100 00:07:11.721 tick 100 00:07:11.721 tick 250 00:07:11.721 tick 100 00:07:11.721 tick 100 00:07:11.721 tick 100 00:07:11.721 tick 250 00:07:11.721 tick 500 00:07:11.721 tick 100 00:07:11.721 tick 100 00:07:11.721 tick 250 00:07:11.721 tick 100 00:07:11.721 tick 100 00:07:11.721 test_end 00:07:11.979 ************************************ 00:07:11.979 END TEST event_reactor 00:07:11.979 ************************************ 00:07:11.979 00:07:11.979 real 0m1.278s 00:07:11.979 user 0m1.118s 00:07:11.979 sys 0m0.053s 00:07:11.979 20:35:06 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:11.979 20:35:06 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:07:11.979 20:35:06 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:07:11.979 20:35:06 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:07:11.979 20:35:06 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:11.979 20:35:06 event -- common/autotest_common.sh@10 -- # set +x 00:07:11.979 ************************************ 00:07:11.979 START TEST event_reactor_perf 00:07:11.979 ************************************ 00:07:11.979 20:35:06 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:07:11.979 [2024-11-26 20:35:06.804873] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
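The event_reactor test above registers a one-shot event plus periodic tick events on core 0 and stops after one second; reactor_perf, whose initialization starts next, measures raw event throughput over the same kind of one-second window. A minimal invocation sketch; the grep pattern is an assumption about the throughput line the tool prints (see "Performance: ... events per second" below):

#!/usr/bin/env bash
# Invocation sketch for the reactor_perf run that begins here.
# -t 1 drives the reactor event loop for one second and reports throughput.
SPDK_DIR=${SPDK_DIR:-/home/vagrant/spdk_repo/spdk}
"$SPDK_DIR/test/event/reactor_perf/reactor_perf" -t 1 | grep 'Performance:'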
00:07:11.979 [2024-11-26 20:35:06.804957] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58203 ] 00:07:11.979 [2024-11-26 20:35:06.957723] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.237 [2024-11-26 20:35:07.021970] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.171 test_start 00:07:13.171 test_end 00:07:13.171 Performance: 397616 events per second 00:07:13.171 00:07:13.171 real 0m1.286s 00:07:13.171 user 0m1.129s 00:07:13.171 sys 0m0.049s 00:07:13.171 20:35:08 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:13.171 ************************************ 00:07:13.171 END TEST event_reactor_perf 00:07:13.171 ************************************ 00:07:13.171 20:35:08 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:07:13.171 20:35:08 event -- event/event.sh@49 -- # uname -s 00:07:13.171 20:35:08 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:07:13.171 20:35:08 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:07:13.171 20:35:08 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:13.171 20:35:08 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:13.171 20:35:08 event -- common/autotest_common.sh@10 -- # set +x 00:07:13.171 ************************************ 00:07:13.171 START TEST event_scheduler 00:07:13.171 ************************************ 00:07:13.171 20:35:08 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:07:13.429 * Looking for test storage... 
00:07:13.429 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:07:13.429 20:35:08 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:13.429 20:35:08 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:07:13.429 20:35:08 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:13.429 20:35:08 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:13.429 20:35:08 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:13.429 20:35:08 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:13.429 20:35:08 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:13.429 20:35:08 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:07:13.429 20:35:08 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:07:13.429 20:35:08 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:07:13.430 20:35:08 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:07:13.430 20:35:08 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:07:13.430 20:35:08 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:07:13.430 20:35:08 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:07:13.430 20:35:08 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:13.430 20:35:08 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:07:13.430 20:35:08 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:07:13.430 20:35:08 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:13.430 20:35:08 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:13.430 20:35:08 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:07:13.430 20:35:08 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:07:13.430 20:35:08 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:13.430 20:35:08 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:07:13.430 20:35:08 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:07:13.430 20:35:08 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:07:13.430 20:35:08 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:07:13.430 20:35:08 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:13.430 20:35:08 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:07:13.430 20:35:08 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:07:13.430 20:35:08 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:13.430 20:35:08 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:13.430 20:35:08 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:07:13.430 20:35:08 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:13.430 20:35:08 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:13.430 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:13.430 --rc genhtml_branch_coverage=1 00:07:13.430 --rc genhtml_function_coverage=1 00:07:13.430 --rc genhtml_legend=1 00:07:13.430 --rc geninfo_all_blocks=1 00:07:13.430 --rc geninfo_unexecuted_blocks=1 00:07:13.430 00:07:13.430 ' 00:07:13.430 20:35:08 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:13.430 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:13.430 --rc genhtml_branch_coverage=1 00:07:13.430 --rc genhtml_function_coverage=1 00:07:13.430 --rc genhtml_legend=1 00:07:13.430 --rc geninfo_all_blocks=1 00:07:13.430 --rc geninfo_unexecuted_blocks=1 00:07:13.430 00:07:13.430 ' 00:07:13.430 20:35:08 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:13.430 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:13.430 --rc genhtml_branch_coverage=1 00:07:13.430 --rc genhtml_function_coverage=1 00:07:13.430 --rc genhtml_legend=1 00:07:13.430 --rc geninfo_all_blocks=1 00:07:13.430 --rc geninfo_unexecuted_blocks=1 00:07:13.430 00:07:13.430 ' 00:07:13.430 20:35:08 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:13.430 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:13.430 --rc genhtml_branch_coverage=1 00:07:13.430 --rc genhtml_function_coverage=1 00:07:13.430 --rc genhtml_legend=1 00:07:13.430 --rc geninfo_all_blocks=1 00:07:13.430 --rc geninfo_unexecuted_blocks=1 00:07:13.430 00:07:13.430 ' 00:07:13.430 20:35:08 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:07:13.430 20:35:08 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58278 00:07:13.430 20:35:08 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:07:13.430 20:35:08 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:07:13.430 20:35:08 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58278 00:07:13.430 20:35:08 
event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 58278 ']' 00:07:13.430 20:35:08 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:13.430 20:35:08 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:13.430 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:13.430 20:35:08 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:13.430 20:35:08 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:13.430 20:35:08 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:13.430 [2024-11-26 20:35:08.409436] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:07:13.430 [2024-11-26 20:35:08.409816] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58278 ] 00:07:13.689 [2024-11-26 20:35:08.563318] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:13.689 [2024-11-26 20:35:08.627471] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.689 [2024-11-26 20:35:08.627646] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:13.689 [2024-11-26 20:35:08.627692] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:13.689 [2024-11-26 20:35:08.627691] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:13.948 20:35:08 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:13.948 20:35:08 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:07:13.948 20:35:08 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:07:13.948 20:35:08 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:13.948 20:35:08 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:13.948 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:07:13.948 POWER: Cannot set governor of lcore 0 to userspace 00:07:13.948 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:07:13.948 POWER: Cannot set governor of lcore 0 to performance 00:07:13.948 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:07:13.948 POWER: Cannot set governor of lcore 0 to userspace 00:07:13.948 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:07:13.948 POWER: Cannot set governor of lcore 0 to userspace 00:07:13.948 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:07:13.948 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:07:13.948 POWER: Unable to set Power Management Environment for lcore 0 00:07:13.948 [2024-11-26 20:35:08.698660] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:07:13.948 [2024-11-26 20:35:08.698758] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:07:13.948 [2024-11-26 20:35:08.698799] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:07:13.948 [2024-11-26 20:35:08.698868] 
scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:07:13.948 [2024-11-26 20:35:08.698904] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:07:13.948 [2024-11-26 20:35:08.698976] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:07:13.948 20:35:08 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:13.948 20:35:08 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:07:13.948 20:35:08 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:13.948 20:35:08 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:13.948 [2024-11-26 20:35:08.752124] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:13.948 [2024-11-26 20:35:08.783726] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:07:13.948 20:35:08 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:13.948 20:35:08 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:07:13.948 20:35:08 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:13.948 20:35:08 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:13.948 20:35:08 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:13.948 ************************************ 00:07:13.948 START TEST scheduler_create_thread 00:07:13.948 ************************************ 00:07:13.948 20:35:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:07:13.948 20:35:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:07:13.948 20:35:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:13.948 20:35:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:13.948 2 00:07:13.948 20:35:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:13.948 20:35:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:07:13.948 20:35:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:13.948 20:35:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:13.948 3 00:07:13.948 20:35:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:13.948 20:35:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:07:13.948 20:35:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:13.948 20:35:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:13.948 4 00:07:13.948 20:35:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:13.948 20:35:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:07:13.948 20:35:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:13.948 20:35:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:13.948 5 00:07:13.948 20:35:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:13.948 20:35:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:07:13.948 20:35:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:13.948 20:35:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:13.948 6 00:07:13.948 20:35:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:13.948 20:35:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:07:13.948 20:35:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:13.948 20:35:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:13.948 7 00:07:13.948 20:35:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:13.948 20:35:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:07:13.948 20:35:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:13.948 20:35:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:13.948 8 00:07:13.948 20:35:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:13.948 20:35:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:07:13.948 20:35:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:13.948 20:35:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:13.948 9 00:07:13.948 20:35:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:13.948 20:35:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:07:13.948 20:35:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:13.948 20:35:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:13.948 10 00:07:13.948 20:35:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:13.949 20:35:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:07:13.949 20:35:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:13.949 20:35:08 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:13.949 20:35:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:13.949 20:35:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:07:13.949 20:35:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:07:13.949 20:35:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:13.949 20:35:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:13.949 20:35:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:13.949 20:35:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:07:13.949 20:35:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:13.949 20:35:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:15.918 20:35:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.918 20:35:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:07:15.918 20:35:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:07:15.918 20:35:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.918 20:35:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:16.487 20:35:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.487 ************************************ 00:07:16.487 END TEST scheduler_create_thread 00:07:16.487 ************************************ 00:07:16.487 00:07:16.487 real 0m2.610s 00:07:16.487 user 0m0.017s 00:07:16.487 sys 0m0.010s 00:07:16.487 20:35:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:16.487 20:35:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:16.487 20:35:11 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:07:16.487 20:35:11 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58278 00:07:16.487 20:35:11 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 58278 ']' 00:07:16.487 20:35:11 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 58278 00:07:16.487 20:35:11 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:07:16.487 20:35:11 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:16.487 20:35:11 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58278 00:07:16.745 killing process with pid 58278 00:07:16.745 20:35:11 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:07:16.745 20:35:11 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:07:16.745 20:35:11 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 
58278' 00:07:16.745 20:35:11 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 58278 00:07:16.745 20:35:11 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 58278 00:07:17.004 [2024-11-26 20:35:11.887601] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:07:17.264 ************************************ 00:07:17.264 END TEST event_scheduler 00:07:17.264 ************************************ 00:07:17.264 00:07:17.264 real 0m3.950s 00:07:17.264 user 0m5.803s 00:07:17.264 sys 0m0.393s 00:07:17.264 20:35:12 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:17.264 20:35:12 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:17.264 20:35:12 event -- event/event.sh@51 -- # modprobe -n nbd 00:07:17.264 20:35:12 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:07:17.264 20:35:12 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:17.264 20:35:12 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:17.264 20:35:12 event -- common/autotest_common.sh@10 -- # set +x 00:07:17.264 ************************************ 00:07:17.264 START TEST app_repeat 00:07:17.264 ************************************ 00:07:17.264 20:35:12 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:07:17.264 20:35:12 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:17.264 20:35:12 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:17.264 20:35:12 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:07:17.264 20:35:12 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:17.264 20:35:12 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:07:17.264 20:35:12 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:07:17.264 20:35:12 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:07:17.264 Process app_repeat pid: 58370 00:07:17.264 spdk_app_start Round 0 00:07:17.264 20:35:12 event.app_repeat -- event/event.sh@19 -- # repeat_pid=58370 00:07:17.264 20:35:12 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:07:17.264 20:35:12 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:07:17.264 20:35:12 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 58370' 00:07:17.264 20:35:12 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:17.264 20:35:12 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:07:17.264 20:35:12 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58370 /var/tmp/spdk-nbd.sock 00:07:17.264 20:35:12 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58370 ']' 00:07:17.264 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:17.264 20:35:12 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:17.264 20:35:12 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:17.264 20:35:12 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
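For anyone replaying the event_scheduler run above by hand, the trace boils down to the RPC sequence sketched below. This is a condensed reading of this log, not the test script itself; the socket path and rpc.py location are the ones this run used, and it assumes the scheduler test app is already listening and that rpc.py can import the scheduler_plugin module shipped with the test.

  rpc='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock'

  # Switch the running app to the dynamic scheduler, then finish init.
  $rpc framework_set_scheduler dynamic
  $rpc framework_start_init

  # Four fully busy threads, one pinned to each core...
  for mask in 0x1 0x2 0x4 0x8; do
    $rpc --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m "$mask" -a 100
  done
  # ...and four idle threads pinned the same way.
  for mask in 0x1 0x2 0x4 0x8; do
    $rpc --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m "$mask" -a 0
  done

  # Unpinned threads: one at 30% load, one bumped to 50% after creation,
  # and one created only to be deleted again (thread ids 11 and 12 in this run).
  $rpc --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30
  tid=$($rpc --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0)
  $rpc --plugin scheduler_plugin scheduler_thread_set_active "$tid" 50
  tid=$($rpc --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100)
  $rpc --plugin scheduler_plugin scheduler_thread_delete "$tid"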
00:07:17.264 20:35:12 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:17.264 20:35:12 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:17.264 [2024-11-26 20:35:12.183492] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:07:17.264 [2024-11-26 20:35:12.183606] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58370 ] 00:07:17.524 [2024-11-26 20:35:12.333390] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:17.524 [2024-11-26 20:35:12.391112] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:17.524 [2024-11-26 20:35:12.391116] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.524 [2024-11-26 20:35:12.434667] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:18.457 20:35:13 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:18.457 20:35:13 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:07:18.457 20:35:13 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:18.716 Malloc0 00:07:18.716 20:35:13 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:18.974 Malloc1 00:07:18.974 20:35:13 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:18.974 20:35:13 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:18.974 20:35:13 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:18.974 20:35:13 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:18.974 20:35:13 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:18.974 20:35:13 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:18.974 20:35:13 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:18.974 20:35:13 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:18.974 20:35:13 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:18.974 20:35:13 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:18.974 20:35:13 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:18.974 20:35:13 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:18.974 20:35:13 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:18.974 20:35:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:18.974 20:35:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:18.974 20:35:13 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:19.233 /dev/nbd0 00:07:19.233 20:35:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:19.233 20:35:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:19.233 20:35:14 event.app_repeat -- common/autotest_common.sh@872 -- # local 
nbd_name=nbd0 00:07:19.233 20:35:14 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:19.233 20:35:14 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:19.233 20:35:14 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:19.233 20:35:14 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:19.233 20:35:14 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:19.233 20:35:14 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:19.233 20:35:14 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:19.233 20:35:14 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:19.233 1+0 records in 00:07:19.233 1+0 records out 00:07:19.233 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000212272 s, 19.3 MB/s 00:07:19.233 20:35:14 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:19.233 20:35:14 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:19.233 20:35:14 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:19.233 20:35:14 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:19.233 20:35:14 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:19.233 20:35:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:19.233 20:35:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:19.233 20:35:14 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:19.491 /dev/nbd1 00:07:19.491 20:35:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:19.491 20:35:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:19.491 20:35:14 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:19.491 20:35:14 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:19.491 20:35:14 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:19.491 20:35:14 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:19.491 20:35:14 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:19.491 20:35:14 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:19.491 20:35:14 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:19.491 20:35:14 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:19.491 20:35:14 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:19.491 1+0 records in 00:07:19.491 1+0 records out 00:07:19.491 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000279849 s, 14.6 MB/s 00:07:19.491 20:35:14 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:19.491 20:35:14 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:19.491 20:35:14 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:19.491 20:35:14 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:19.491 20:35:14 event.app_repeat -- 
common/autotest_common.sh@893 -- # return 0 00:07:19.491 20:35:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:19.491 20:35:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:19.491 20:35:14 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:19.491 20:35:14 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:19.491 20:35:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:20.059 20:35:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:20.059 { 00:07:20.059 "nbd_device": "/dev/nbd0", 00:07:20.059 "bdev_name": "Malloc0" 00:07:20.059 }, 00:07:20.059 { 00:07:20.059 "nbd_device": "/dev/nbd1", 00:07:20.059 "bdev_name": "Malloc1" 00:07:20.059 } 00:07:20.059 ]' 00:07:20.059 20:35:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:20.059 { 00:07:20.059 "nbd_device": "/dev/nbd0", 00:07:20.059 "bdev_name": "Malloc0" 00:07:20.059 }, 00:07:20.059 { 00:07:20.059 "nbd_device": "/dev/nbd1", 00:07:20.059 "bdev_name": "Malloc1" 00:07:20.059 } 00:07:20.059 ]' 00:07:20.059 20:35:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:20.059 20:35:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:20.059 /dev/nbd1' 00:07:20.059 20:35:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:20.059 /dev/nbd1' 00:07:20.059 20:35:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:20.059 20:35:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:20.059 20:35:14 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:20.059 20:35:14 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:20.059 20:35:14 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:20.059 20:35:14 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:20.059 20:35:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:20.059 20:35:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:20.059 20:35:14 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:20.059 20:35:14 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:20.059 20:35:14 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:20.059 20:35:14 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:20.059 256+0 records in 00:07:20.059 256+0 records out 00:07:20.059 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00676208 s, 155 MB/s 00:07:20.059 20:35:14 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:20.059 20:35:14 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:20.059 256+0 records in 00:07:20.059 256+0 records out 00:07:20.059 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.022442 s, 46.7 MB/s 00:07:20.059 20:35:14 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:20.059 20:35:14 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:20.059 256+0 records in 00:07:20.059 
256+0 records out 00:07:20.059 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0267854 s, 39.1 MB/s 00:07:20.059 20:35:14 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:20.059 20:35:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:20.059 20:35:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:20.059 20:35:14 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:20.059 20:35:14 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:20.059 20:35:14 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:20.059 20:35:14 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:20.059 20:35:14 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:20.059 20:35:14 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:07:20.059 20:35:14 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:20.059 20:35:14 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:07:20.059 20:35:14 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:20.059 20:35:14 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:20.059 20:35:14 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:20.059 20:35:14 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:20.059 20:35:14 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:20.059 20:35:14 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:20.059 20:35:14 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:20.059 20:35:14 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:20.318 20:35:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:20.318 20:35:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:20.318 20:35:15 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:20.318 20:35:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:20.318 20:35:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:20.318 20:35:15 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:20.318 20:35:15 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:20.318 20:35:15 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:20.318 20:35:15 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:20.318 20:35:15 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:20.577 20:35:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:20.577 20:35:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:20.577 20:35:15 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:20.577 20:35:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:20.577 20:35:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 
00:07:20.577 20:35:15 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:20.577 20:35:15 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:20.577 20:35:15 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:20.577 20:35:15 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:20.577 20:35:15 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:20.577 20:35:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:20.837 20:35:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:20.837 20:35:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:20.837 20:35:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:20.837 20:35:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:20.837 20:35:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:20.837 20:35:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:20.837 20:35:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:20.837 20:35:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:20.837 20:35:15 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:20.837 20:35:15 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:20.837 20:35:15 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:20.837 20:35:15 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:20.837 20:35:15 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:21.095 20:35:16 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:21.353 [2024-11-26 20:35:16.213445] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:21.353 [2024-11-26 20:35:16.271629] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:21.353 [2024-11-26 20:35:16.271633] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.353 [2024-11-26 20:35:16.316018] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:21.353 [2024-11-26 20:35:16.316104] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:21.353 [2024-11-26 20:35:16.316118] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:24.632 spdk_app_start Round 1 00:07:24.632 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:24.632 20:35:19 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:24.632 20:35:19 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:07:24.632 20:35:19 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58370 /var/tmp/spdk-nbd.sock 00:07:24.632 20:35:19 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58370 ']' 00:07:24.632 20:35:19 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:24.632 20:35:19 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:24.632 20:35:19 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
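The Malloc0/Malloc1 round trip that just completed is the nbd_rpc_data_verify write-then-compare pattern; condensed from the trace above, with device names and file paths as used in this run and the bdevs assumed already exported through the nbd_start_disk RPCs shown earlier:

  tmp=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest

  # 1 MiB of random data, written through each exported nbd device with
  # O_DIRECT, then compared byte-for-byte against the source file.
  dd if=/dev/urandom of="$tmp" bs=4096 count=256
  for nbd in /dev/nbd0 /dev/nbd1; do
    dd if="$tmp" of="$nbd" bs=4096 count=256 oflag=direct
  done
  for nbd in /dev/nbd0 /dev/nbd1; do
    cmp -b -n 1M "$tmp" "$nbd"
  done
  rm "$tmp"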
00:07:24.632 20:35:19 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:24.632 20:35:19 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:24.632 20:35:19 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:24.632 20:35:19 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:07:24.632 20:35:19 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:24.890 Malloc0 00:07:24.890 20:35:19 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:24.890 Malloc1 00:07:25.259 20:35:19 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:25.259 20:35:19 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:25.259 20:35:19 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:25.259 20:35:19 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:25.259 20:35:19 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:25.259 20:35:19 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:25.259 20:35:19 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:25.259 20:35:19 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:25.259 20:35:19 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:25.259 20:35:19 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:25.259 20:35:19 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:25.259 20:35:19 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:25.259 20:35:19 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:25.259 20:35:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:25.260 20:35:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:25.260 20:35:19 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:25.260 /dev/nbd0 00:07:25.260 20:35:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:25.260 20:35:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:25.260 20:35:20 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:25.260 20:35:20 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:25.260 20:35:20 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:25.260 20:35:20 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:25.260 20:35:20 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:25.260 20:35:20 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:25.260 20:35:20 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:25.260 20:35:20 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:25.260 20:35:20 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:25.260 1+0 records in 00:07:25.260 1+0 records out 
00:07:25.260 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000526681 s, 7.8 MB/s 00:07:25.260 20:35:20 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:25.260 20:35:20 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:25.260 20:35:20 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:25.260 20:35:20 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:25.260 20:35:20 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:25.260 20:35:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:25.260 20:35:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:25.260 20:35:20 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:25.518 /dev/nbd1 00:07:25.518 20:35:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:25.518 20:35:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:25.518 20:35:20 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:25.518 20:35:20 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:25.518 20:35:20 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:25.518 20:35:20 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:25.518 20:35:20 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:25.518 20:35:20 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:25.518 20:35:20 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:25.518 20:35:20 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:25.518 20:35:20 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:25.518 1+0 records in 00:07:25.518 1+0 records out 00:07:25.518 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000421086 s, 9.7 MB/s 00:07:25.518 20:35:20 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:25.518 20:35:20 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:25.518 20:35:20 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:25.518 20:35:20 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:25.518 20:35:20 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:25.518 20:35:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:25.518 20:35:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:25.518 20:35:20 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:25.518 20:35:20 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:25.518 20:35:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:25.776 20:35:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:25.776 { 00:07:25.776 "nbd_device": "/dev/nbd0", 00:07:25.776 "bdev_name": "Malloc0" 00:07:25.776 }, 00:07:25.776 { 00:07:25.776 "nbd_device": "/dev/nbd1", 00:07:25.776 "bdev_name": "Malloc1" 00:07:25.776 } 
00:07:25.776 ]' 00:07:25.776 20:35:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:25.776 { 00:07:25.776 "nbd_device": "/dev/nbd0", 00:07:25.776 "bdev_name": "Malloc0" 00:07:25.776 }, 00:07:25.776 { 00:07:25.776 "nbd_device": "/dev/nbd1", 00:07:25.776 "bdev_name": "Malloc1" 00:07:25.776 } 00:07:25.776 ]' 00:07:25.776 20:35:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:25.776 20:35:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:25.776 /dev/nbd1' 00:07:25.776 20:35:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:25.776 20:35:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:25.776 /dev/nbd1' 00:07:25.776 20:35:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:25.776 20:35:20 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:25.776 20:35:20 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:25.776 20:35:20 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:25.776 20:35:20 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:25.776 20:35:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:25.776 20:35:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:25.776 20:35:20 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:25.776 20:35:20 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:25.776 20:35:20 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:25.776 20:35:20 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:25.776 256+0 records in 00:07:25.776 256+0 records out 00:07:25.776 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00926543 s, 113 MB/s 00:07:25.776 20:35:20 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:25.776 20:35:20 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:25.776 256+0 records in 00:07:25.776 256+0 records out 00:07:25.776 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.023097 s, 45.4 MB/s 00:07:25.776 20:35:20 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:25.776 20:35:20 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:26.035 256+0 records in 00:07:26.035 256+0 records out 00:07:26.035 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0255924 s, 41.0 MB/s 00:07:26.035 20:35:20 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:26.035 20:35:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:26.035 20:35:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:26.035 20:35:20 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:26.035 20:35:20 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:26.035 20:35:20 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:26.035 20:35:20 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:26.035 20:35:20 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:07:26.035 20:35:20 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:07:26.035 20:35:20 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:26.035 20:35:20 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:07:26.035 20:35:20 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:26.035 20:35:20 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:26.035 20:35:20 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:26.035 20:35:20 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:26.035 20:35:20 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:26.035 20:35:20 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:26.035 20:35:20 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:26.035 20:35:20 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:26.294 20:35:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:26.294 20:35:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:26.294 20:35:21 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:26.294 20:35:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:26.294 20:35:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:26.294 20:35:21 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:26.294 20:35:21 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:26.294 20:35:21 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:26.294 20:35:21 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:26.294 20:35:21 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:26.552 20:35:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:26.552 20:35:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:26.552 20:35:21 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:26.552 20:35:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:26.552 20:35:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:26.552 20:35:21 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:26.552 20:35:21 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:26.552 20:35:21 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:26.552 20:35:21 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:26.552 20:35:21 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:26.552 20:35:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:26.810 20:35:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:26.810 20:35:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:26.810 20:35:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:07:26.810 20:35:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:26.810 20:35:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:26.810 20:35:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:26.810 20:35:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:27.068 20:35:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:27.068 20:35:21 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:27.068 20:35:21 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:27.068 20:35:21 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:27.068 20:35:21 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:27.068 20:35:21 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:27.328 20:35:22 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:27.328 [2024-11-26 20:35:22.285448] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:27.587 [2024-11-26 20:35:22.341853] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:27.587 [2024-11-26 20:35:22.341859] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.587 [2024-11-26 20:35:22.387528] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:27.587 [2024-11-26 20:35:22.387618] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:27.587 [2024-11-26 20:35:22.387631] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:30.871 spdk_app_start Round 2 00:07:30.871 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:30.871 20:35:25 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:30.871 20:35:25 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:07:30.871 20:35:25 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58370 /var/tmp/spdk-nbd.sock 00:07:30.871 20:35:25 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58370 ']' 00:07:30.871 20:35:25 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:30.871 20:35:25 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:30.871 20:35:25 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
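The repeated waitfornbd checks in the rounds above follow the shape sketched here, reconstructed from the trace: poll /proc/partitions until the device shows up, then confirm a direct 4 KiB read returns data. The retry delay is an assumption (the trace never needed a second pass); the temp-file path is the one this run used.

  waitfornbd_sketch() {
    local nbd_name=$1 i size
    local testfile=/home/vagrant/spdk_repo/spdk/test/event/nbdtest
    # Wait for the kernel to publish the partition entry.
    for ((i = 1; i <= 20; i++)); do
      grep -q -w "$nbd_name" /proc/partitions && break
      sleep 0.1   # assumed back-off between polls, not visible in this trace
    done
    # Prove a direct read actually returns data before using the device.
    for ((i = 1; i <= 20; i++)); do
      dd if=/dev/"$nbd_name" of="$testfile" bs=4096 count=1 iflag=direct
      size=$(stat -c %s "$testfile")
      rm -f "$testfile"
      [[ $size != 0 ]] && return 0
    done
    return 1
  }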
00:07:30.871 20:35:25 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:30.871 20:35:25 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:30.871 20:35:25 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:30.871 20:35:25 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:07:30.871 20:35:25 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:30.871 Malloc0 00:07:30.871 20:35:25 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:31.129 Malloc1 00:07:31.129 20:35:26 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:31.129 20:35:26 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:31.129 20:35:26 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:31.129 20:35:26 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:31.129 20:35:26 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:31.129 20:35:26 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:31.129 20:35:26 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:31.129 20:35:26 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:31.129 20:35:26 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:31.129 20:35:26 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:31.129 20:35:26 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:31.129 20:35:26 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:31.129 20:35:26 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:31.129 20:35:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:31.129 20:35:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:31.129 20:35:26 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:31.696 /dev/nbd0 00:07:31.696 20:35:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:31.696 20:35:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:31.696 20:35:26 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:31.696 20:35:26 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:31.696 20:35:26 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:31.696 20:35:26 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:31.696 20:35:26 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:31.696 20:35:26 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:31.696 20:35:26 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:31.696 20:35:26 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:31.696 20:35:26 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:31.696 1+0 records in 00:07:31.696 1+0 records out 
00:07:31.696 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000229804 s, 17.8 MB/s 00:07:31.696 20:35:26 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:31.696 20:35:26 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:31.696 20:35:26 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:31.696 20:35:26 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:31.696 20:35:26 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:31.696 20:35:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:31.696 20:35:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:31.696 20:35:26 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:31.954 /dev/nbd1 00:07:31.954 20:35:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:31.954 20:35:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:31.954 20:35:26 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:31.954 20:35:26 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:31.954 20:35:26 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:31.954 20:35:26 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:31.954 20:35:26 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:31.954 20:35:26 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:31.954 20:35:26 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:31.954 20:35:26 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:31.954 20:35:26 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:31.954 1+0 records in 00:07:31.954 1+0 records out 00:07:31.954 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00028804 s, 14.2 MB/s 00:07:31.954 20:35:26 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:31.954 20:35:26 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:31.954 20:35:26 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:31.954 20:35:26 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:31.954 20:35:26 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:31.954 20:35:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:31.954 20:35:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:31.954 20:35:26 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:31.954 20:35:26 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:31.954 20:35:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:32.213 20:35:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:32.213 { 00:07:32.213 "nbd_device": "/dev/nbd0", 00:07:32.213 "bdev_name": "Malloc0" 00:07:32.213 }, 00:07:32.213 { 00:07:32.213 "nbd_device": "/dev/nbd1", 00:07:32.213 "bdev_name": "Malloc1" 00:07:32.213 } 
00:07:32.213 ]' 00:07:32.213 20:35:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:32.213 20:35:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:32.213 { 00:07:32.213 "nbd_device": "/dev/nbd0", 00:07:32.213 "bdev_name": "Malloc0" 00:07:32.213 }, 00:07:32.213 { 00:07:32.213 "nbd_device": "/dev/nbd1", 00:07:32.213 "bdev_name": "Malloc1" 00:07:32.213 } 00:07:32.213 ]' 00:07:32.213 20:35:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:32.213 /dev/nbd1' 00:07:32.213 20:35:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:32.213 /dev/nbd1' 00:07:32.213 20:35:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:32.213 20:35:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:32.213 20:35:27 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:32.213 20:35:27 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:32.213 20:35:27 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:32.213 20:35:27 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:32.213 20:35:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:32.213 20:35:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:32.213 20:35:27 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:32.213 20:35:27 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:32.213 20:35:27 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:32.213 20:35:27 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:32.213 256+0 records in 00:07:32.213 256+0 records out 00:07:32.213 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00757691 s, 138 MB/s 00:07:32.213 20:35:27 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:32.213 20:35:27 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:32.213 256+0 records in 00:07:32.213 256+0 records out 00:07:32.213 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0295027 s, 35.5 MB/s 00:07:32.213 20:35:27 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:32.213 20:35:27 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:32.473 256+0 records in 00:07:32.473 256+0 records out 00:07:32.473 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0315081 s, 33.3 MB/s 00:07:32.473 20:35:27 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:32.473 20:35:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:32.473 20:35:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:32.473 20:35:27 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:32.473 20:35:27 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:32.473 20:35:27 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:32.473 20:35:27 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:32.473 20:35:27 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:32.473 20:35:27 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:07:32.473 20:35:27 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:32.473 20:35:27 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:07:32.473 20:35:27 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:32.473 20:35:27 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:32.473 20:35:27 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:32.473 20:35:27 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:32.473 20:35:27 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:32.473 20:35:27 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:32.473 20:35:27 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:32.473 20:35:27 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:32.732 20:35:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:32.732 20:35:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:32.732 20:35:27 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:32.732 20:35:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:32.732 20:35:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:32.732 20:35:27 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:32.732 20:35:27 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:32.732 20:35:27 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:32.732 20:35:27 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:32.732 20:35:27 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:32.991 20:35:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:32.991 20:35:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:32.991 20:35:27 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:32.991 20:35:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:32.991 20:35:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:32.991 20:35:27 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:32.991 20:35:27 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:32.991 20:35:27 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:32.991 20:35:27 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:32.991 20:35:27 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:32.991 20:35:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:33.250 20:35:28 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:33.250 20:35:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:33.250 20:35:28 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:07:33.250 20:35:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:33.250 20:35:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:33.250 20:35:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:33.250 20:35:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:33.250 20:35:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:33.250 20:35:28 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:33.250 20:35:28 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:33.250 20:35:28 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:33.250 20:35:28 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:33.250 20:35:28 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:33.508 20:35:28 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:33.767 [2024-11-26 20:35:28.539571] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:33.767 [2024-11-26 20:35:28.596366] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:33.767 [2024-11-26 20:35:28.596372] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.767 [2024-11-26 20:35:28.640848] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:33.767 [2024-11-26 20:35:28.640939] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:33.767 [2024-11-26 20:35:28.640952] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:37.077 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:37.077 20:35:31 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58370 /var/tmp/spdk-nbd.sock 00:07:37.077 20:35:31 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58370 ']' 00:07:37.077 20:35:31 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:37.077 20:35:31 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:37.077 20:35:31 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
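
The nbd_dd_data_verify steps traced above reduce to a short write-then-compare pattern: fill a temp file with random data, push it onto each exported /dev/nbdX with O_DIRECT, then byte-compare the device contents back against the file and clean up. A minimal bash sketch of that flow, using only the commands, sizes, and paths visible in the trace (anything beyond them would be an assumption):

# write phase: 256 x 4 KiB of random data, copied to each device with O_DIRECT
nbd_list=(/dev/nbd0 /dev/nbd1)
tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
for dev in "${nbd_list[@]}"; do
    dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct
done

# verify phase: compare the first 1 MiB of each device back against the file
for dev in "${nbd_list[@]}"; do
    cmp -b -n 1M "$tmp_file" "$dev"
done
rm "$tmp_file"
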
00:07:37.077 20:35:31 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:37.077 20:35:31 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:37.077 20:35:31 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:37.077 20:35:31 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:07:37.077 20:35:31 event.app_repeat -- event/event.sh@39 -- # killprocess 58370 00:07:37.077 20:35:31 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 58370 ']' 00:07:37.077 20:35:31 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 58370 00:07:37.077 20:35:31 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:07:37.077 20:35:31 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:37.077 20:35:31 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58370 00:07:37.077 20:35:31 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:37.077 20:35:31 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:37.077 killing process with pid 58370 00:07:37.077 20:35:31 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58370' 00:07:37.077 20:35:31 event.app_repeat -- common/autotest_common.sh@973 -- # kill 58370 00:07:37.077 20:35:31 event.app_repeat -- common/autotest_common.sh@978 -- # wait 58370 00:07:37.077 spdk_app_start is called in Round 0. 00:07:37.077 Shutdown signal received, stop current app iteration 00:07:37.077 Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 reinitialization... 00:07:37.077 spdk_app_start is called in Round 1. 00:07:37.077 Shutdown signal received, stop current app iteration 00:07:37.077 Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 reinitialization... 00:07:37.078 spdk_app_start is called in Round 2. 00:07:37.078 Shutdown signal received, stop current app iteration 00:07:37.078 Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 reinitialization... 00:07:37.078 spdk_app_start is called in Round 3. 00:07:37.078 Shutdown signal received, stop current app iteration 00:07:37.078 20:35:31 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:07:37.078 20:35:31 event.app_repeat -- event/event.sh@42 -- # return 0 00:07:37.078 ************************************ 00:07:37.078 END TEST app_repeat 00:07:37.078 ************************************ 00:07:37.078 00:07:37.078 real 0m19.792s 00:07:37.078 user 0m44.626s 00:07:37.078 sys 0m3.489s 00:07:37.078 20:35:31 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:37.078 20:35:31 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:37.078 20:35:31 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:07:37.078 20:35:31 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:07:37.078 20:35:31 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:37.078 20:35:31 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:37.078 20:35:31 event -- common/autotest_common.sh@10 -- # set +x 00:07:37.078 ************************************ 00:07:37.078 START TEST cpu_locks 00:07:37.078 ************************************ 00:07:37.078 20:35:31 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:07:37.078 * Looking for test storage... 
00:07:37.336 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:07:37.336 20:35:32 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:37.336 20:35:32 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:37.336 20:35:32 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:07:37.336 20:35:32 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:37.336 20:35:32 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:37.336 20:35:32 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:37.336 20:35:32 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:37.336 20:35:32 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:07:37.336 20:35:32 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:07:37.336 20:35:32 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:07:37.336 20:35:32 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:07:37.336 20:35:32 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:07:37.336 20:35:32 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:07:37.336 20:35:32 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:07:37.336 20:35:32 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:37.336 20:35:32 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:07:37.336 20:35:32 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:07:37.336 20:35:32 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:37.336 20:35:32 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:37.336 20:35:32 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:07:37.336 20:35:32 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:07:37.336 20:35:32 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:37.337 20:35:32 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:07:37.337 20:35:32 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:07:37.337 20:35:32 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:07:37.337 20:35:32 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:07:37.337 20:35:32 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:37.337 20:35:32 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:07:37.337 20:35:32 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:07:37.337 20:35:32 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:37.337 20:35:32 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:37.337 20:35:32 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:07:37.337 20:35:32 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:37.337 20:35:32 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:37.337 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:37.337 --rc genhtml_branch_coverage=1 00:07:37.337 --rc genhtml_function_coverage=1 00:07:37.337 --rc genhtml_legend=1 00:07:37.337 --rc geninfo_all_blocks=1 00:07:37.337 --rc geninfo_unexecuted_blocks=1 00:07:37.337 00:07:37.337 ' 00:07:37.337 20:35:32 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:37.337 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:37.337 --rc genhtml_branch_coverage=1 00:07:37.337 --rc genhtml_function_coverage=1 
00:07:37.337 --rc genhtml_legend=1 00:07:37.337 --rc geninfo_all_blocks=1 00:07:37.337 --rc geninfo_unexecuted_blocks=1 00:07:37.337 00:07:37.337 ' 00:07:37.337 20:35:32 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:37.337 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:37.337 --rc genhtml_branch_coverage=1 00:07:37.337 --rc genhtml_function_coverage=1 00:07:37.337 --rc genhtml_legend=1 00:07:37.337 --rc geninfo_all_blocks=1 00:07:37.337 --rc geninfo_unexecuted_blocks=1 00:07:37.337 00:07:37.337 ' 00:07:37.337 20:35:32 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:37.337 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:37.337 --rc genhtml_branch_coverage=1 00:07:37.337 --rc genhtml_function_coverage=1 00:07:37.337 --rc genhtml_legend=1 00:07:37.337 --rc geninfo_all_blocks=1 00:07:37.337 --rc geninfo_unexecuted_blocks=1 00:07:37.337 00:07:37.337 ' 00:07:37.337 20:35:32 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:07:37.337 20:35:32 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:07:37.337 20:35:32 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:07:37.337 20:35:32 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:07:37.337 20:35:32 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:37.337 20:35:32 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:37.337 20:35:32 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:37.337 ************************************ 00:07:37.337 START TEST default_locks 00:07:37.337 ************************************ 00:07:37.337 20:35:32 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:07:37.337 20:35:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=58816 00:07:37.337 20:35:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 58816 00:07:37.337 20:35:32 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58816 ']' 00:07:37.337 20:35:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:37.337 20:35:32 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:37.337 20:35:32 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:37.337 20:35:32 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:37.337 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:37.337 20:35:32 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:37.337 20:35:32 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:37.337 [2024-11-26 20:35:32.261138] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:07:37.337 [2024-11-26 20:35:32.261509] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58816 ] 00:07:37.596 [2024-11-26 20:35:32.425356] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.596 [2024-11-26 20:35:32.489833] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.596 [2024-11-26 20:35:32.559414] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:38.533 20:35:33 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:38.533 20:35:33 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:07:38.533 20:35:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 58816 00:07:38.533 20:35:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 58816 00:07:38.533 20:35:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:38.791 20:35:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 58816 00:07:38.792 20:35:33 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 58816 ']' 00:07:38.792 20:35:33 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 58816 00:07:38.792 20:35:33 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:07:38.792 20:35:33 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:38.792 20:35:33 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58816 00:07:39.050 killing process with pid 58816 00:07:39.050 20:35:33 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:39.050 20:35:33 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:39.050 20:35:33 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58816' 00:07:39.050 20:35:33 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 58816 00:07:39.050 20:35:33 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 58816 00:07:39.308 20:35:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 58816 00:07:39.308 20:35:34 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:07:39.308 20:35:34 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 58816 00:07:39.308 20:35:34 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:07:39.308 20:35:34 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:39.308 20:35:34 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:07:39.308 20:35:34 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:39.308 20:35:34 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 58816 00:07:39.308 20:35:34 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58816 ']' 00:07:39.308 20:35:34 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:39.308 
20:35:34 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:39.308 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:39.308 20:35:34 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:39.308 20:35:34 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:39.308 20:35:34 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:39.308 ERROR: process (pid: 58816) is no longer running 00:07:39.308 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (58816) - No such process 00:07:39.308 20:35:34 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:39.308 20:35:34 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:07:39.308 20:35:34 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:07:39.308 20:35:34 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:39.308 20:35:34 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:39.308 20:35:34 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:39.308 20:35:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:07:39.308 20:35:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:39.308 20:35:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:07:39.308 20:35:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:39.308 00:07:39.308 real 0m1.972s 00:07:39.308 user 0m2.176s 00:07:39.308 sys 0m0.619s 00:07:39.308 20:35:34 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:39.308 ************************************ 00:07:39.308 END TEST default_locks 00:07:39.308 ************************************ 00:07:39.308 20:35:34 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:39.308 20:35:34 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:07:39.308 20:35:34 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:39.308 20:35:34 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:39.308 20:35:34 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:39.309 ************************************ 00:07:39.309 START TEST default_locks_via_rpc 00:07:39.309 ************************************ 00:07:39.309 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
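
The default_locks run above leans on two small helpers that are easier to read as plain shell than as xtrace output: locks_exist, which asks lslocks whether the given pid still holds a file named spdk_cpu_lock*, and killprocess, which checks the pid is alive and looks like an SPDK reactor before killing and reaping it. A rough sketch under those assumptions (names and checks mirror the trace; retries and error reporting are omitted):

# locks_exist: succeed only if the pid holds an spdk_cpu_lock file
locks_exist() {
    lslocks -p "$1" | grep -q spdk_cpu_lock
}

# killprocess: refuse to touch anything that is not a plain reactor process
killprocess() {
    local pid=$1
    kill -0 "$pid" || return 1                        # process is already gone
    local process_name
    process_name=$(ps --no-headers -o comm= "$pid")   # e.g. reactor_0
    [[ $process_name != sudo ]] || return 1           # the traced helper bails out for sudo
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"                                       # reap the child and surface its exit code
}
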
00:07:39.309 20:35:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:07:39.309 20:35:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=58868 00:07:39.309 20:35:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:39.309 20:35:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 58868 00:07:39.309 20:35:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 58868 ']' 00:07:39.309 20:35:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:39.309 20:35:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:39.309 20:35:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:39.309 20:35:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:39.309 20:35:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:39.309 [2024-11-26 20:35:34.274025] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:07:39.309 [2024-11-26 20:35:34.274372] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58868 ] 00:07:39.567 [2024-11-26 20:35:34.421438] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:39.567 [2024-11-26 20:35:34.481034] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.567 [2024-11-26 20:35:34.544377] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:39.824 20:35:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:39.824 20:35:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:39.824 20:35:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:07:39.824 20:35:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.824 20:35:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:39.824 20:35:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.824 20:35:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:07:39.824 20:35:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:39.824 20:35:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:07:39.824 20:35:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:39.824 20:35:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:07:39.824 20:35:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.824 20:35:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:39.824 20:35:34 event.cpu_locks.default_locks_via_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.824 20:35:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 58868 00:07:39.824 20:35:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 58868 00:07:39.824 20:35:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:40.392 20:35:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 58868 00:07:40.392 20:35:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 58868 ']' 00:07:40.392 20:35:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 58868 00:07:40.392 20:35:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:07:40.392 20:35:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:40.392 20:35:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58868 00:07:40.392 killing process with pid 58868 00:07:40.392 20:35:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:40.392 20:35:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:40.392 20:35:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58868' 00:07:40.392 20:35:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 58868 00:07:40.392 20:35:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 58868 00:07:40.959 00:07:40.959 real 0m1.479s 00:07:40.959 user 0m1.484s 00:07:40.959 sys 0m0.610s 00:07:40.959 20:35:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:40.959 20:35:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:40.959 ************************************ 00:07:40.959 END TEST default_locks_via_rpc 00:07:40.959 ************************************ 00:07:40.959 20:35:35 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:07:40.959 20:35:35 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:40.959 20:35:35 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:40.959 20:35:35 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:40.959 ************************************ 00:07:40.959 START TEST non_locking_app_on_locked_coremask 00:07:40.959 ************************************ 00:07:40.959 20:35:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:07:40.959 20:35:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=58912 00:07:40.959 20:35:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 58912 /var/tmp/spdk.sock 00:07:40.959 20:35:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58912 ']' 00:07:40.959 20:35:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:40.959 20:35:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 
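
The default_locks_via_rpc sequence above drives the same lock lifecycle over RPC instead of start-up flags: judging by the trace, framework_disable_cpumask_locks releases the per-core lock files (the no_locks check then finds none under /var/tmp) and framework_enable_cpumask_locks claims them again. A hedged sketch of that sequence; the RPC method names, script path, and lock-file glob are the ones in the trace, while the pid variable is purely illustrative:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# target was started with a lock on core 0; drop the lock at runtime...
$rpc framework_disable_cpumask_locks
ls /var/tmp/spdk_cpu_lock_* 2>/dev/null | wc -l        # expect 0 while unlocked

# ...then take it again and confirm the target holds it
$rpc framework_enable_cpumask_locks
lslocks -p "$spdk_tgt_pid" | grep -q spdk_cpu_lock && echo "core lock held"
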
00:07:40.959 20:35:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:40.959 20:35:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:40.959 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:40.959 20:35:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:40.959 20:35:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:40.960 [2024-11-26 20:35:35.826338] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:07:40.960 [2024-11-26 20:35:35.826926] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58912 ] 00:07:41.219 [2024-11-26 20:35:35.986867] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.219 [2024-11-26 20:35:36.051753] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.219 [2024-11-26 20:35:36.118768] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:42.155 20:35:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:42.155 20:35:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:42.155 20:35:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=58928 00:07:42.155 20:35:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:07:42.155 20:35:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 58928 /var/tmp/spdk2.sock 00:07:42.155 20:35:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58928 ']' 00:07:42.155 20:35:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:42.155 20:35:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:42.156 20:35:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:42.156 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:42.156 20:35:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:42.156 20:35:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:42.156 [2024-11-26 20:35:36.949449] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:07:42.156 [2024-11-26 20:35:36.949825] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58928 ] 00:07:42.156 [2024-11-26 20:35:37.111847] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:07:42.156 [2024-11-26 20:35:37.111929] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:42.413 [2024-11-26 20:35:37.233396] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.414 [2024-11-26 20:35:37.358912] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:43.345 20:35:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:43.345 20:35:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:43.345 20:35:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 58912 00:07:43.345 20:35:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:43.345 20:35:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58912 00:07:44.280 20:35:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 58912 00:07:44.280 20:35:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58912 ']' 00:07:44.280 20:35:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 58912 00:07:44.280 20:35:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:44.280 20:35:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:44.280 20:35:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58912 00:07:44.280 20:35:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:44.280 20:35:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:44.280 killing process with pid 58912 00:07:44.280 20:35:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58912' 00:07:44.280 20:35:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 58912 00:07:44.280 20:35:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 58912 00:07:44.845 20:35:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 58928 00:07:44.845 20:35:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58928 ']' 00:07:44.845 20:35:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 58928 00:07:44.845 20:35:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:44.845 20:35:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:44.845 20:35:39 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58928 00:07:45.102 20:35:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:45.102 killing process with pid 58928 00:07:45.102 20:35:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:45.102 20:35:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58928' 00:07:45.102 20:35:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 58928 00:07:45.102 20:35:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 58928 00:07:45.360 ************************************ 00:07:45.360 END TEST non_locking_app_on_locked_coremask 00:07:45.360 ************************************ 00:07:45.360 00:07:45.360 real 0m4.447s 00:07:45.360 user 0m5.124s 00:07:45.360 sys 0m1.298s 00:07:45.360 20:35:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:45.360 20:35:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:45.360 20:35:40 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:07:45.360 20:35:40 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:45.360 20:35:40 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:45.360 20:35:40 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:45.360 ************************************ 00:07:45.360 START TEST locking_app_on_unlocked_coremask 00:07:45.360 ************************************ 00:07:45.360 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:45.360 20:35:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:07:45.360 20:35:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=59000 00:07:45.360 20:35:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 59000 /var/tmp/spdk.sock 00:07:45.360 20:35:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59000 ']' 00:07:45.360 20:35:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:07:45.360 20:35:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:45.360 20:35:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:45.360 20:35:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:45.360 20:35:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:45.360 20:35:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:45.360 [2024-11-26 20:35:40.325906] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:07:45.360 [2024-11-26 20:35:40.326051] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59000 ] 00:07:45.617 [2024-11-26 20:35:40.485012] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:07:45.617 [2024-11-26 20:35:40.485375] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.617 [2024-11-26 20:35:40.550766] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.874 [2024-11-26 20:35:40.623041] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:45.874 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:45.874 20:35:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:45.874 20:35:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:45.874 20:35:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=59009 00:07:45.874 20:35:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:45.874 20:35:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 59009 /var/tmp/spdk2.sock 00:07:45.874 20:35:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59009 ']' 00:07:45.874 20:35:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:45.874 20:35:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:45.874 20:35:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:45.874 20:35:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:45.874 20:35:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:46.248 [2024-11-26 20:35:40.923184] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:07:46.248 [2024-11-26 20:35:40.923553] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59009 ] 00:07:46.248 [2024-11-26 20:35:41.088862] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:46.248 [2024-11-26 20:35:41.206046] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.519 [2024-11-26 20:35:41.333426] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:46.776 20:35:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:46.776 20:35:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:46.776 20:35:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 59009 00:07:46.776 20:35:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59009 00:07:46.776 20:35:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:48.148 20:35:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 59000 00:07:48.148 20:35:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59000 ']' 00:07:48.148 20:35:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59000 00:07:48.148 20:35:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:48.148 20:35:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:48.148 20:35:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59000 00:07:48.148 20:35:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:48.148 killing process with pid 59000 00:07:48.148 20:35:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:48.148 20:35:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59000' 00:07:48.148 20:35:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59000 00:07:48.148 20:35:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59000 00:07:48.716 20:35:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 59009 00:07:48.716 20:35:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59009 ']' 00:07:48.716 20:35:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59009 00:07:48.716 20:35:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:48.716 20:35:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:48.716 20:35:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59009 00:07:48.716 killing process with pid 59009 00:07:48.716 20:35:43 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:48.716 20:35:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:48.716 20:35:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59009' 00:07:48.716 20:35:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59009 00:07:48.716 20:35:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59009 00:07:48.974 ************************************ 00:07:48.974 END TEST locking_app_on_unlocked_coremask 00:07:48.974 ************************************ 00:07:48.974 00:07:48.974 real 0m3.620s 00:07:48.974 user 0m3.910s 00:07:48.974 sys 0m1.284s 00:07:48.974 20:35:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:48.974 20:35:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:48.974 20:35:43 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:07:48.974 20:35:43 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:48.974 20:35:43 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:48.974 20:35:43 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:48.974 ************************************ 00:07:48.974 START TEST locking_app_on_locked_coremask 00:07:48.974 ************************************ 00:07:48.974 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:48.974 20:35:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:07:48.974 20:35:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=59074 00:07:48.974 20:35:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:48.974 20:35:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 59074 /var/tmp/spdk.sock 00:07:48.974 20:35:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59074 ']' 00:07:48.974 20:35:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:48.974 20:35:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:48.974 20:35:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:48.974 20:35:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:48.974 20:35:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:49.233 [2024-11-26 20:35:43.995837] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:07:49.233 [2024-11-26 20:35:43.996226] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59074 ] 00:07:49.233 [2024-11-26 20:35:44.155667] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:49.491 [2024-11-26 20:35:44.236090] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.491 [2024-11-26 20:35:44.313345] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:50.427 20:35:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:50.427 20:35:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:50.427 20:35:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=59090 00:07:50.427 20:35:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:50.427 20:35:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 59090 /var/tmp/spdk2.sock 00:07:50.427 20:35:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:07:50.427 20:35:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59090 /var/tmp/spdk2.sock 00:07:50.427 20:35:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:07:50.427 20:35:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:50.427 20:35:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:07:50.427 20:35:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:50.427 20:35:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59090 /var/tmp/spdk2.sock 00:07:50.427 20:35:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59090 ']' 00:07:50.427 20:35:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:50.427 20:35:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:50.427 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:50.427 20:35:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:50.427 20:35:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:50.427 20:35:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:50.427 [2024-11-26 20:35:45.149634] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:07:50.427 [2024-11-26 20:35:45.149958] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59090 ] 00:07:50.427 [2024-11-26 20:35:45.312458] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 59074 has claimed it. 00:07:50.427 [2024-11-26 20:35:45.312547] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:50.996 ERROR: process (pid: 59090) is no longer running 00:07:50.996 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59090) - No such process 00:07:50.996 20:35:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:50.996 20:35:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:07:50.996 20:35:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:07:50.996 20:35:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:50.996 20:35:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:50.996 20:35:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:50.996 20:35:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 59074 00:07:50.996 20:35:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59074 00:07:50.996 20:35:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:51.562 20:35:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 59074 00:07:51.562 20:35:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59074 ']' 00:07:51.562 20:35:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59074 00:07:51.562 20:35:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:51.562 20:35:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:51.562 20:35:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59074 00:07:51.562 20:35:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:51.562 20:35:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:51.562 killing process with pid 59074 00:07:51.562 20:35:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59074' 00:07:51.562 20:35:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59074 00:07:51.562 20:35:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59074 00:07:51.820 00:07:51.820 real 0m2.859s 00:07:51.820 user 0m3.421s 00:07:51.820 sys 0m0.737s 00:07:51.820 20:35:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:51.820 ************************************ 00:07:51.820 
20:35:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:51.820 END TEST locking_app_on_locked_coremask 00:07:51.821 ************************************ 00:07:52.078 20:35:46 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:07:52.078 20:35:46 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:52.079 20:35:46 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:52.079 20:35:46 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:52.079 ************************************ 00:07:52.079 START TEST locking_overlapped_coremask 00:07:52.079 ************************************ 00:07:52.079 20:35:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:07:52.079 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:52.079 20:35:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=59135 00:07:52.079 20:35:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:07:52.079 20:35:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 59135 /var/tmp/spdk.sock 00:07:52.079 20:35:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59135 ']' 00:07:52.079 20:35:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:52.079 20:35:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:52.079 20:35:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:52.079 20:35:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:52.079 20:35:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:52.079 [2024-11-26 20:35:46.920173] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:07:52.079 [2024-11-26 20:35:46.920579] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59135 ] 00:07:52.337 [2024-11-26 20:35:47.076102] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:52.337 [2024-11-26 20:35:47.138140] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:52.337 [2024-11-26 20:35:47.138367] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:52.337 [2024-11-26 20:35:47.138492] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.337 [2024-11-26 20:35:47.202312] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:52.594 20:35:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:52.594 20:35:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:52.594 20:35:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=59146 00:07:52.594 20:35:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 59146 /var/tmp/spdk2.sock 00:07:52.594 20:35:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:07:52.594 20:35:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59146 /var/tmp/spdk2.sock 00:07:52.594 20:35:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:07:52.595 20:35:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:07:52.595 20:35:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:52.595 20:35:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:07:52.595 20:35:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:52.595 20:35:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59146 /var/tmp/spdk2.sock 00:07:52.595 20:35:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59146 ']' 00:07:52.595 20:35:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:52.595 20:35:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:52.595 20:35:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:52.595 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:52.595 20:35:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:52.595 20:35:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:52.595 [2024-11-26 20:35:47.469734] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:07:52.595 [2024-11-26 20:35:47.470090] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59146 ] 00:07:52.852 [2024-11-26 20:35:47.639054] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59135 has claimed it. 00:07:52.852 [2024-11-26 20:35:47.639142] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:53.418 ERROR: process (pid: 59146) is no longer running 00:07:53.418 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59146) - No such process 00:07:53.418 20:35:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:53.418 20:35:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:07:53.418 20:35:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:07:53.418 20:35:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:53.418 20:35:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:53.418 20:35:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:53.418 20:35:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:53.418 20:35:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:53.418 20:35:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:53.418 20:35:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:53.418 20:35:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 59135 00:07:53.418 20:35:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 59135 ']' 00:07:53.418 20:35:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 59135 00:07:53.418 20:35:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:07:53.418 20:35:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:53.418 20:35:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59135 00:07:53.418 killing process with pid 59135 00:07:53.418 20:35:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:53.418 20:35:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:53.418 20:35:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59135' 00:07:53.418 20:35:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 59135 00:07:53.418 20:35:48 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 59135 00:07:53.677 ************************************ 00:07:53.677 END TEST locking_overlapped_coremask 00:07:53.677 ************************************ 00:07:53.677 00:07:53.677 real 0m1.813s 00:07:53.677 user 0m5.006s 00:07:53.677 sys 0m0.436s 00:07:53.677 20:35:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:53.677 20:35:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:53.936 20:35:48 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:53.936 20:35:48 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:53.936 20:35:48 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:53.936 20:35:48 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:53.936 ************************************ 00:07:53.936 START TEST locking_overlapped_coremask_via_rpc 00:07:53.936 ************************************ 00:07:53.936 20:35:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:07:53.936 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:53.936 20:35:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=59191 00:07:53.936 20:35:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:53.936 20:35:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 59191 /var/tmp/spdk.sock 00:07:53.936 20:35:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59191 ']' 00:07:53.936 20:35:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:53.936 20:35:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:53.936 20:35:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:53.936 20:35:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:53.936 20:35:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:53.936 [2024-11-26 20:35:48.797145] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:07:53.936 [2024-11-26 20:35:48.797558] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59191 ] 00:07:54.195 [2024-11-26 20:35:48.954830] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
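The failure driven by locking_overlapped_coremask follows directly from the two core masks: the first spdk_tgt runs with -m 0x7 (cores 0, 1, 2) and holds /var/tmp/spdk_cpu_lock_000 through _002, while the second instance is launched with -m 0x1c (cores 2, 3, 4), so the masks overlap on core 2 and claim_cpu_cores aborts with the "Cannot create lock on core 2" error seen above. A quick way to confirm the overlap by hand (an illustration, not part of the test script):

    printf 'overlap mask: 0x%x\n' $(( 0x7 & 0x1c ))   # prints 0x4, i.e. core 2
    ls /var/tmp/spdk_cpu_lock_*                        # lock files held by the running target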
00:07:54.195 [2024-11-26 20:35:48.955178] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:54.195 [2024-11-26 20:35:49.030205] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:54.195 [2024-11-26 20:35:49.030305] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:54.195 [2024-11-26 20:35:49.030311] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:54.195 [2024-11-26 20:35:49.099676] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:55.151 20:35:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:55.151 20:35:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:55.151 20:35:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=59209 00:07:55.151 20:35:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:07:55.151 20:35:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 59209 /var/tmp/spdk2.sock 00:07:55.151 20:35:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59209 ']' 00:07:55.151 20:35:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:55.151 20:35:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:55.151 20:35:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:55.151 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:55.151 20:35:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:55.151 20:35:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:55.151 [2024-11-26 20:35:49.949415] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:07:55.151 [2024-11-26 20:35:49.949888] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59209 ] 00:07:55.151 [2024-11-26 20:35:50.116180] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:55.151 [2024-11-26 20:35:50.116255] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:55.409 [2024-11-26 20:35:50.236938] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:55.409 [2024-11-26 20:35:50.237002] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:55.409 [2024-11-26 20:35:50.237001] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:07:55.409 [2024-11-26 20:35:50.372530] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:56.343 20:35:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:56.343 20:35:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:56.343 20:35:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:56.343 20:35:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.343 20:35:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:56.343 20:35:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.343 20:35:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:56.343 20:35:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:07:56.343 20:35:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:56.343 20:35:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:07:56.343 20:35:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:56.343 20:35:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:07:56.343 20:35:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:56.343 20:35:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:56.343 20:35:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.343 20:35:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:56.343 [2024-11-26 20:35:51.000347] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59191 has claimed it. 
00:07:56.343 request: 00:07:56.343 { 00:07:56.343 "method": "framework_enable_cpumask_locks", 00:07:56.343 "req_id": 1 00:07:56.343 } 00:07:56.343 Got JSON-RPC error response 00:07:56.343 response: 00:07:56.343 { 00:07:56.343 "code": -32603, 00:07:56.343 "message": "Failed to claim CPU core: 2" 00:07:56.343 } 00:07:56.343 20:35:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:07:56.343 20:35:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:07:56.343 20:35:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:56.343 20:35:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:56.343 20:35:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:56.343 20:35:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 59191 /var/tmp/spdk.sock 00:07:56.343 20:35:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59191 ']' 00:07:56.343 20:35:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:56.343 20:35:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:56.343 20:35:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:56.343 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:56.343 20:35:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:56.343 20:35:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:56.602 20:35:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:56.602 20:35:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:56.602 20:35:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 59209 /var/tmp/spdk2.sock 00:07:56.602 20:35:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59209 ']' 00:07:56.602 20:35:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:56.602 20:35:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:56.602 20:35:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:56.602 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
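locking_overlapped_coremask_via_rpc repeats the same conflict but defers the claim: both targets start with --disable-cpumask-locks (hence the "CPU core locks deactivated" notices), and the locks are only taken when framework_enable_cpumask_locks is called over JSON-RPC. The primary claims cores 0-2 successfully; the same call against the secondary's socket fails with the -32603 "Failed to claim CPU core: 2" response shown above. A condensed reconstruction of the sequence, using the binaries and sockets from this run:

    build/bin/spdk_tgt -m 0x7  --disable-cpumask-locks &
    build/bin/spdk_tgt -m 0x1c --disable-cpumask-locks -r /var/tmp/spdk2.sock &
    scripts/rpc.py framework_enable_cpumask_locks                          # primary claims locks 000-002
    scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks   # rejected: core 2 already claimed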
00:07:56.602 20:35:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:56.602 20:35:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:56.860 20:35:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:56.860 20:35:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:56.860 20:35:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:56.860 20:35:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:56.860 ************************************ 00:07:56.860 END TEST locking_overlapped_coremask_via_rpc 00:07:56.860 ************************************ 00:07:56.860 20:35:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:56.860 20:35:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:56.860 00:07:56.860 real 0m2.931s 00:07:56.860 user 0m1.613s 00:07:56.860 sys 0m0.232s 00:07:56.860 20:35:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:56.860 20:35:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:56.860 20:35:51 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:07:56.860 20:35:51 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59191 ]] 00:07:56.860 20:35:51 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59191 00:07:56.860 20:35:51 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59191 ']' 00:07:56.860 20:35:51 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59191 00:07:56.860 20:35:51 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:07:56.860 20:35:51 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:56.860 20:35:51 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59191 00:07:56.860 killing process with pid 59191 00:07:56.860 20:35:51 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:56.860 20:35:51 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:56.860 20:35:51 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59191' 00:07:56.860 20:35:51 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59191 00:07:56.860 20:35:51 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59191 00:07:57.119 20:35:52 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59209 ]] 00:07:57.119 20:35:52 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59209 00:07:57.119 20:35:52 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59209 ']' 00:07:57.119 20:35:52 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59209 00:07:57.119 20:35:52 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:07:57.119 20:35:52 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:57.119 
20:35:52 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59209 00:07:57.376 killing process with pid 59209 00:07:57.377 20:35:52 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:07:57.377 20:35:52 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:07:57.377 20:35:52 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59209' 00:07:57.377 20:35:52 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59209 00:07:57.377 20:35:52 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59209 00:07:57.635 20:35:52 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:57.635 20:35:52 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:07:57.635 20:35:52 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59191 ]] 00:07:57.635 20:35:52 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59191 00:07:57.635 20:35:52 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59191 ']' 00:07:57.635 Process with pid 59191 is not found 00:07:57.635 Process with pid 59209 is not found 00:07:57.635 20:35:52 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59191 00:07:57.635 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59191) - No such process 00:07:57.635 20:35:52 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59191 is not found' 00:07:57.635 20:35:52 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59209 ]] 00:07:57.635 20:35:52 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59209 00:07:57.635 20:35:52 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59209 ']' 00:07:57.635 20:35:52 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59209 00:07:57.635 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59209) - No such process 00:07:57.635 20:35:52 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59209 is not found' 00:07:57.635 20:35:52 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:57.635 00:07:57.635 real 0m20.506s 00:07:57.635 user 0m36.494s 00:07:57.635 sys 0m6.169s 00:07:57.635 20:35:52 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:57.635 20:35:52 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:57.635 ************************************ 00:07:57.635 END TEST cpu_locks 00:07:57.635 ************************************ 00:07:57.635 00:07:57.635 real 0m48.665s 00:07:57.635 user 1m33.490s 00:07:57.635 sys 0m10.546s 00:07:57.635 20:35:52 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:57.635 ************************************ 00:07:57.635 END TEST event 00:07:57.635 ************************************ 00:07:57.635 20:35:52 event -- common/autotest_common.sh@10 -- # set +x 00:07:57.635 20:35:52 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:57.635 20:35:52 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:57.635 20:35:52 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:57.635 20:35:52 -- common/autotest_common.sh@10 -- # set +x 00:07:57.635 ************************************ 00:07:57.635 START TEST thread 00:07:57.635 ************************************ 00:07:57.635 20:35:52 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:57.896 * Looking for test storage... 
00:07:57.896 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:07:57.896 20:35:52 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:57.896 20:35:52 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:07:57.896 20:35:52 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:57.896 20:35:52 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:57.896 20:35:52 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:57.896 20:35:52 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:57.896 20:35:52 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:57.896 20:35:52 thread -- scripts/common.sh@336 -- # IFS=.-: 00:07:57.896 20:35:52 thread -- scripts/common.sh@336 -- # read -ra ver1 00:07:57.896 20:35:52 thread -- scripts/common.sh@337 -- # IFS=.-: 00:07:57.896 20:35:52 thread -- scripts/common.sh@337 -- # read -ra ver2 00:07:57.896 20:35:52 thread -- scripts/common.sh@338 -- # local 'op=<' 00:07:57.896 20:35:52 thread -- scripts/common.sh@340 -- # ver1_l=2 00:07:57.896 20:35:52 thread -- scripts/common.sh@341 -- # ver2_l=1 00:07:57.896 20:35:52 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:57.896 20:35:52 thread -- scripts/common.sh@344 -- # case "$op" in 00:07:57.896 20:35:52 thread -- scripts/common.sh@345 -- # : 1 00:07:57.896 20:35:52 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:57.896 20:35:52 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:57.896 20:35:52 thread -- scripts/common.sh@365 -- # decimal 1 00:07:57.896 20:35:52 thread -- scripts/common.sh@353 -- # local d=1 00:07:57.896 20:35:52 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:57.896 20:35:52 thread -- scripts/common.sh@355 -- # echo 1 00:07:57.896 20:35:52 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:07:57.896 20:35:52 thread -- scripts/common.sh@366 -- # decimal 2 00:07:57.896 20:35:52 thread -- scripts/common.sh@353 -- # local d=2 00:07:57.896 20:35:52 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:57.896 20:35:52 thread -- scripts/common.sh@355 -- # echo 2 00:07:57.896 20:35:52 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:07:57.896 20:35:52 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:57.896 20:35:52 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:57.896 20:35:52 thread -- scripts/common.sh@368 -- # return 0 00:07:57.896 20:35:52 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:57.896 20:35:52 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:57.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:57.896 --rc genhtml_branch_coverage=1 00:07:57.896 --rc genhtml_function_coverage=1 00:07:57.896 --rc genhtml_legend=1 00:07:57.896 --rc geninfo_all_blocks=1 00:07:57.896 --rc geninfo_unexecuted_blocks=1 00:07:57.896 00:07:57.896 ' 00:07:57.896 20:35:52 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:57.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:57.896 --rc genhtml_branch_coverage=1 00:07:57.896 --rc genhtml_function_coverage=1 00:07:57.896 --rc genhtml_legend=1 00:07:57.896 --rc geninfo_all_blocks=1 00:07:57.896 --rc geninfo_unexecuted_blocks=1 00:07:57.896 00:07:57.896 ' 00:07:57.896 20:35:52 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:57.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:07:57.896 --rc genhtml_branch_coverage=1 00:07:57.896 --rc genhtml_function_coverage=1 00:07:57.896 --rc genhtml_legend=1 00:07:57.896 --rc geninfo_all_blocks=1 00:07:57.896 --rc geninfo_unexecuted_blocks=1 00:07:57.896 00:07:57.896 ' 00:07:57.896 20:35:52 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:57.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:57.896 --rc genhtml_branch_coverage=1 00:07:57.896 --rc genhtml_function_coverage=1 00:07:57.896 --rc genhtml_legend=1 00:07:57.896 --rc geninfo_all_blocks=1 00:07:57.896 --rc geninfo_unexecuted_blocks=1 00:07:57.896 00:07:57.896 ' 00:07:57.896 20:35:52 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:57.896 20:35:52 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:07:57.896 20:35:52 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:57.896 20:35:52 thread -- common/autotest_common.sh@10 -- # set +x 00:07:57.896 ************************************ 00:07:57.896 START TEST thread_poller_perf 00:07:57.896 ************************************ 00:07:57.896 20:35:52 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:57.896 [2024-11-26 20:35:52.822698] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:07:57.896 [2024-11-26 20:35:52.823006] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59345 ] 00:07:58.154 [2024-11-26 20:35:52.970579] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:58.154 [2024-11-26 20:35:53.046779] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.154 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:07:59.530 [2024-11-26T20:35:54.523Z] ====================================== 00:07:59.530 [2024-11-26T20:35:54.523Z] busy:2109053334 (cyc) 00:07:59.530 [2024-11-26T20:35:54.523Z] total_run_count: 348000 00:07:59.530 [2024-11-26T20:35:54.523Z] tsc_hz: 2100000000 (cyc) 00:07:59.530 [2024-11-26T20:35:54.523Z] ====================================== 00:07:59.530 [2024-11-26T20:35:54.523Z] poller_cost: 6060 (cyc), 2885 (nsec) 00:07:59.530 00:07:59.530 real 0m1.301s 00:07:59.530 user 0m1.135s 00:07:59.530 sys 0m0.058s 00:07:59.530 20:35:54 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:59.530 20:35:54 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:59.530 ************************************ 00:07:59.530 END TEST thread_poller_perf 00:07:59.530 ************************************ 00:07:59.530 20:35:54 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:59.530 20:35:54 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:07:59.530 20:35:54 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:59.530 20:35:54 thread -- common/autotest_common.sh@10 -- # set +x 00:07:59.530 ************************************ 00:07:59.530 START TEST thread_poller_perf 00:07:59.530 ************************************ 00:07:59.531 20:35:54 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:59.531 [2024-11-26 20:35:54.189059] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:07:59.531 [2024-11-26 20:35:54.189429] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59381 ] 00:07:59.531 [2024-11-26 20:35:54.343127] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:59.531 Running 1000 pollers for 1 seconds with 0 microseconds period. 
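The poller_cost line in each summary table is plain arithmetic over the counters above it: cycles per poll is busy divided by total_run_count, converted to nanoseconds with tsc_hz. For the 1 microsecond-period run, 2109053334 / 348000 is about 6060 cycles, and 6060 cycles at 2.1 GHz is about 2885 ns, matching the reported values; the 0 microsecond run below works out the same way (439 cycles, 209 ns). The same check in shell:

    echo $(( 2109053334 / 348000 ))                  # 6060 cycles per poll
    echo 'scale=0; 6060 * 10^9 / 2100000000' | bc    # 2885 ns per poll at a 2.1 GHz TSC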
00:07:59.531 [2024-11-26 20:35:54.398531] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:00.467 [2024-11-26T20:35:55.460Z] ====================================== 00:08:00.467 [2024-11-26T20:35:55.460Z] busy:2101784304 (cyc) 00:08:00.467 [2024-11-26T20:35:55.460Z] total_run_count: 4780000 00:08:00.467 [2024-11-26T20:35:55.460Z] tsc_hz: 2100000000 (cyc) 00:08:00.467 [2024-11-26T20:35:55.460Z] ====================================== 00:08:00.467 [2024-11-26T20:35:55.460Z] poller_cost: 439 (cyc), 209 (nsec) 00:08:00.467 00:08:00.467 real 0m1.277s 00:08:00.467 user 0m1.121s 00:08:00.467 sys 0m0.050s 00:08:00.467 20:35:55 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:00.467 ************************************ 00:08:00.467 END TEST thread_poller_perf 00:08:00.467 20:35:55 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:08:00.467 ************************************ 00:08:00.726 20:35:55 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:08:00.726 ************************************ 00:08:00.726 END TEST thread 00:08:00.726 ************************************ 00:08:00.726 00:08:00.726 real 0m2.890s 00:08:00.726 user 0m2.397s 00:08:00.726 sys 0m0.281s 00:08:00.726 20:35:55 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:00.727 20:35:55 thread -- common/autotest_common.sh@10 -- # set +x 00:08:00.727 20:35:55 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:08:00.727 20:35:55 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:08:00.727 20:35:55 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:00.727 20:35:55 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:00.727 20:35:55 -- common/autotest_common.sh@10 -- # set +x 00:08:00.727 ************************************ 00:08:00.727 START TEST app_cmdline 00:08:00.727 ************************************ 00:08:00.727 20:35:55 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:08:00.727 * Looking for test storage... 
00:08:00.727 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:08:00.727 20:35:55 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:00.727 20:35:55 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:00.727 20:35:55 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:08:00.985 20:35:55 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:00.985 20:35:55 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:00.985 20:35:55 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:00.985 20:35:55 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:00.985 20:35:55 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:08:00.985 20:35:55 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:08:00.985 20:35:55 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:08:00.985 20:35:55 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:08:00.985 20:35:55 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:08:00.985 20:35:55 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:08:00.985 20:35:55 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:08:00.985 20:35:55 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:00.985 20:35:55 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:08:00.985 20:35:55 app_cmdline -- scripts/common.sh@345 -- # : 1 00:08:00.985 20:35:55 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:00.985 20:35:55 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:00.985 20:35:55 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:08:00.985 20:35:55 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:08:00.985 20:35:55 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:00.985 20:35:55 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:08:00.985 20:35:55 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:08:00.985 20:35:55 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:08:00.985 20:35:55 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:08:00.985 20:35:55 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:00.986 20:35:55 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:08:00.986 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:00.986 20:35:55 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:08:00.986 20:35:55 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:00.986 20:35:55 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:00.986 20:35:55 app_cmdline -- scripts/common.sh@368 -- # return 0 00:08:00.986 20:35:55 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:00.986 20:35:55 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:00.986 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:00.986 --rc genhtml_branch_coverage=1 00:08:00.986 --rc genhtml_function_coverage=1 00:08:00.986 --rc genhtml_legend=1 00:08:00.986 --rc geninfo_all_blocks=1 00:08:00.986 --rc geninfo_unexecuted_blocks=1 00:08:00.986 00:08:00.986 ' 00:08:00.986 20:35:55 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:00.986 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:00.986 --rc genhtml_branch_coverage=1 00:08:00.986 --rc genhtml_function_coverage=1 00:08:00.986 --rc genhtml_legend=1 00:08:00.986 --rc geninfo_all_blocks=1 00:08:00.986 --rc geninfo_unexecuted_blocks=1 00:08:00.986 00:08:00.986 ' 00:08:00.986 20:35:55 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:00.986 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:00.986 --rc genhtml_branch_coverage=1 00:08:00.986 --rc genhtml_function_coverage=1 00:08:00.986 --rc genhtml_legend=1 00:08:00.986 --rc geninfo_all_blocks=1 00:08:00.986 --rc geninfo_unexecuted_blocks=1 00:08:00.986 00:08:00.986 ' 00:08:00.986 20:35:55 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:00.986 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:00.986 --rc genhtml_branch_coverage=1 00:08:00.986 --rc genhtml_function_coverage=1 00:08:00.986 --rc genhtml_legend=1 00:08:00.986 --rc geninfo_all_blocks=1 00:08:00.986 --rc geninfo_unexecuted_blocks=1 00:08:00.986 00:08:00.986 ' 00:08:00.986 20:35:55 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:08:00.986 20:35:55 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=59459 00:08:00.986 20:35:55 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 59459 00:08:00.986 20:35:55 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 59459 ']' 00:08:00.986 20:35:55 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:08:00.986 20:35:55 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:00.986 20:35:55 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:00.986 20:35:55 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:00.986 20:35:55 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:00.986 20:35:55 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:00.986 [2024-11-26 20:35:55.809555] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:08:00.986 [2024-11-26 20:35:55.809892] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59459 ] 00:08:00.986 [2024-11-26 20:35:55.954650] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:01.245 [2024-11-26 20:35:56.009564] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.245 [2024-11-26 20:35:56.081804] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:01.512 20:35:56 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:01.512 20:35:56 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:08:01.512 20:35:56 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:08:01.512 { 00:08:01.512 "version": "SPDK v25.01-pre git sha1 2f2acf4eb", 00:08:01.512 "fields": { 00:08:01.512 "major": 25, 00:08:01.512 "minor": 1, 00:08:01.512 "patch": 0, 00:08:01.512 "suffix": "-pre", 00:08:01.512 "commit": "2f2acf4eb" 00:08:01.512 } 00:08:01.512 } 00:08:01.512 20:35:56 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:08:01.512 20:35:56 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:08:01.512 20:35:56 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:08:01.512 20:35:56 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:08:01.512 20:35:56 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:08:01.512 20:35:56 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.512 20:35:56 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:01.795 20:35:56 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:08:01.795 20:35:56 app_cmdline -- app/cmdline.sh@26 -- # sort 00:08:01.795 20:35:56 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.795 20:35:56 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:08:01.796 20:35:56 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:08:01.796 20:35:56 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:01.796 20:35:56 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:08:01.796 20:35:56 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:01.796 20:35:56 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:01.796 20:35:56 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:01.796 20:35:56 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:01.796 20:35:56 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:01.796 20:35:56 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:01.796 20:35:56 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:01.796 20:35:56 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:01.796 20:35:56 app_cmdline -- common/autotest_common.sh@646 -- # 
[[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:08:01.796 20:35:56 app_cmdline -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:01.796 request: 00:08:01.796 { 00:08:01.796 "method": "env_dpdk_get_mem_stats", 00:08:01.796 "req_id": 1 00:08:01.796 } 00:08:01.796 Got JSON-RPC error response 00:08:01.796 response: 00:08:01.796 { 00:08:01.796 "code": -32601, 00:08:01.796 "message": "Method not found" 00:08:01.796 } 00:08:01.796 20:35:56 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:08:01.796 20:35:56 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:01.796 20:35:56 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:01.796 20:35:56 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:01.796 20:35:56 app_cmdline -- app/cmdline.sh@1 -- # killprocess 59459 00:08:01.796 20:35:56 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 59459 ']' 00:08:01.796 20:35:56 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 59459 00:08:01.796 20:35:56 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:08:01.796 20:35:56 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:01.796 20:35:56 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59459 00:08:02.054 killing process with pid 59459 00:08:02.054 20:35:56 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:02.054 20:35:56 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:02.054 20:35:56 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59459' 00:08:02.054 20:35:56 app_cmdline -- common/autotest_common.sh@973 -- # kill 59459 00:08:02.054 20:35:56 app_cmdline -- common/autotest_common.sh@978 -- # wait 59459 00:08:02.311 00:08:02.311 real 0m1.601s 00:08:02.311 user 0m1.866s 00:08:02.311 sys 0m0.474s 00:08:02.311 ************************************ 00:08:02.311 END TEST app_cmdline 00:08:02.311 ************************************ 00:08:02.311 20:35:57 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:02.311 20:35:57 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:02.311 20:35:57 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:08:02.311 20:35:57 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:02.311 20:35:57 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:02.311 20:35:57 -- common/autotest_common.sh@10 -- # set +x 00:08:02.311 ************************************ 00:08:02.311 START TEST version 00:08:02.311 ************************************ 00:08:02.311 20:35:57 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:08:02.311 * Looking for test storage... 
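The app_cmdline run above exercises the --rpcs-allowed filter: spdk_tgt is started permitting only spdk_get_version and rpc_get_methods, both of those calls succeed (rpc_get_methods returns exactly that pair, hence the (( 2 == 2 )) check), and the unlisted env_dpdk_get_mem_stats is rejected with JSON-RPC error -32601 "Method not found". Condensed, as cmdline.sh drives it:

    build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
    scripts/rpc.py spdk_get_version          # allowed, prints the version object shown above
    scripts/rpc.py rpc_get_methods           # allowed, lists only the two permitted methods
    scripts/rpc.py env_dpdk_get_mem_stats    # rejected with -32601 "Method not found"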
00:08:02.570 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:08:02.570 20:35:57 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:02.570 20:35:57 version -- common/autotest_common.sh@1693 -- # lcov --version 00:08:02.570 20:35:57 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:02.570 20:35:57 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:02.570 20:35:57 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:02.570 20:35:57 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:02.570 20:35:57 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:02.570 20:35:57 version -- scripts/common.sh@336 -- # IFS=.-: 00:08:02.570 20:35:57 version -- scripts/common.sh@336 -- # read -ra ver1 00:08:02.570 20:35:57 version -- scripts/common.sh@337 -- # IFS=.-: 00:08:02.570 20:35:57 version -- scripts/common.sh@337 -- # read -ra ver2 00:08:02.570 20:35:57 version -- scripts/common.sh@338 -- # local 'op=<' 00:08:02.570 20:35:57 version -- scripts/common.sh@340 -- # ver1_l=2 00:08:02.570 20:35:57 version -- scripts/common.sh@341 -- # ver2_l=1 00:08:02.570 20:35:57 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:02.570 20:35:57 version -- scripts/common.sh@344 -- # case "$op" in 00:08:02.570 20:35:57 version -- scripts/common.sh@345 -- # : 1 00:08:02.570 20:35:57 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:02.570 20:35:57 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:02.570 20:35:57 version -- scripts/common.sh@365 -- # decimal 1 00:08:02.570 20:35:57 version -- scripts/common.sh@353 -- # local d=1 00:08:02.570 20:35:57 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:02.570 20:35:57 version -- scripts/common.sh@355 -- # echo 1 00:08:02.570 20:35:57 version -- scripts/common.sh@365 -- # ver1[v]=1 00:08:02.570 20:35:57 version -- scripts/common.sh@366 -- # decimal 2 00:08:02.570 20:35:57 version -- scripts/common.sh@353 -- # local d=2 00:08:02.570 20:35:57 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:02.570 20:35:57 version -- scripts/common.sh@355 -- # echo 2 00:08:02.570 20:35:57 version -- scripts/common.sh@366 -- # ver2[v]=2 00:08:02.570 20:35:57 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:02.570 20:35:57 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:02.570 20:35:57 version -- scripts/common.sh@368 -- # return 0 00:08:02.570 20:35:57 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:02.570 20:35:57 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:02.570 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:02.570 --rc genhtml_branch_coverage=1 00:08:02.570 --rc genhtml_function_coverage=1 00:08:02.570 --rc genhtml_legend=1 00:08:02.570 --rc geninfo_all_blocks=1 00:08:02.570 --rc geninfo_unexecuted_blocks=1 00:08:02.570 00:08:02.570 ' 00:08:02.570 20:35:57 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:02.570 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:02.570 --rc genhtml_branch_coverage=1 00:08:02.570 --rc genhtml_function_coverage=1 00:08:02.570 --rc genhtml_legend=1 00:08:02.570 --rc geninfo_all_blocks=1 00:08:02.570 --rc geninfo_unexecuted_blocks=1 00:08:02.570 00:08:02.570 ' 00:08:02.570 20:35:57 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:02.570 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:08:02.570 --rc genhtml_branch_coverage=1 00:08:02.570 --rc genhtml_function_coverage=1 00:08:02.570 --rc genhtml_legend=1 00:08:02.570 --rc geninfo_all_blocks=1 00:08:02.570 --rc geninfo_unexecuted_blocks=1 00:08:02.570 00:08:02.570 ' 00:08:02.570 20:35:57 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:02.570 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:02.570 --rc genhtml_branch_coverage=1 00:08:02.570 --rc genhtml_function_coverage=1 00:08:02.570 --rc genhtml_legend=1 00:08:02.570 --rc geninfo_all_blocks=1 00:08:02.570 --rc geninfo_unexecuted_blocks=1 00:08:02.570 00:08:02.570 ' 00:08:02.570 20:35:57 version -- app/version.sh@17 -- # get_header_version major 00:08:02.570 20:35:57 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:02.570 20:35:57 version -- app/version.sh@14 -- # cut -f2 00:08:02.570 20:35:57 version -- app/version.sh@14 -- # tr -d '"' 00:08:02.570 20:35:57 version -- app/version.sh@17 -- # major=25 00:08:02.570 20:35:57 version -- app/version.sh@18 -- # get_header_version minor 00:08:02.570 20:35:57 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:02.570 20:35:57 version -- app/version.sh@14 -- # cut -f2 00:08:02.570 20:35:57 version -- app/version.sh@14 -- # tr -d '"' 00:08:02.570 20:35:57 version -- app/version.sh@18 -- # minor=1 00:08:02.570 20:35:57 version -- app/version.sh@19 -- # get_header_version patch 00:08:02.570 20:35:57 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:02.570 20:35:57 version -- app/version.sh@14 -- # cut -f2 00:08:02.570 20:35:57 version -- app/version.sh@14 -- # tr -d '"' 00:08:02.570 20:35:57 version -- app/version.sh@19 -- # patch=0 00:08:02.570 20:35:57 version -- app/version.sh@20 -- # get_header_version suffix 00:08:02.570 20:35:57 version -- app/version.sh@14 -- # tr -d '"' 00:08:02.570 20:35:57 version -- app/version.sh@14 -- # cut -f2 00:08:02.570 20:35:57 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:02.570 20:35:57 version -- app/version.sh@20 -- # suffix=-pre 00:08:02.570 20:35:57 version -- app/version.sh@22 -- # version=25.1 00:08:02.570 20:35:57 version -- app/version.sh@25 -- # (( patch != 0 )) 00:08:02.570 20:35:57 version -- app/version.sh@28 -- # version=25.1rc0 00:08:02.570 20:35:57 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:08:02.570 20:35:57 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:08:02.570 20:35:57 version -- app/version.sh@30 -- # py_version=25.1rc0 00:08:02.570 20:35:57 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:08:02.570 ************************************ 00:08:02.570 END TEST version 00:08:02.570 ************************************ 00:08:02.570 00:08:02.570 real 0m0.286s 00:08:02.570 user 0m0.181s 00:08:02.570 sys 0m0.148s 00:08:02.570 20:35:57 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:02.570 20:35:57 version -- common/autotest_common.sh@10 -- # set +x 00:08:02.570 20:35:57 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:08:02.570 20:35:57 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:08:02.570 20:35:57 -- spdk/autotest.sh@194 -- # uname -s 00:08:02.829 20:35:57 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:08:02.829 20:35:57 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:08:02.829 20:35:57 -- spdk/autotest.sh@195 -- # [[ 1 -eq 1 ]] 00:08:02.829 20:35:57 -- spdk/autotest.sh@201 -- # [[ 0 -eq 0 ]] 00:08:02.829 20:35:57 -- spdk/autotest.sh@202 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:08:02.829 20:35:57 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:02.829 20:35:57 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:02.829 20:35:57 -- common/autotest_common.sh@10 -- # set +x 00:08:02.829 ************************************ 00:08:02.829 START TEST spdk_dd 00:08:02.829 ************************************ 00:08:02.829 20:35:57 spdk_dd -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:08:02.829 * Looking for test storage... 00:08:02.829 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:02.829 20:35:57 spdk_dd -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:02.829 20:35:57 spdk_dd -- common/autotest_common.sh@1693 -- # lcov --version 00:08:02.829 20:35:57 spdk_dd -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:02.829 20:35:57 spdk_dd -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:02.829 20:35:57 spdk_dd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:02.829 20:35:57 spdk_dd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:02.829 20:35:57 spdk_dd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:02.829 20:35:57 spdk_dd -- scripts/common.sh@336 -- # IFS=.-: 00:08:02.829 20:35:57 spdk_dd -- scripts/common.sh@336 -- # read -ra ver1 00:08:02.829 20:35:57 spdk_dd -- scripts/common.sh@337 -- # IFS=.-: 00:08:02.829 20:35:57 spdk_dd -- scripts/common.sh@337 -- # read -ra ver2 00:08:02.829 20:35:57 spdk_dd -- scripts/common.sh@338 -- # local 'op=<' 00:08:02.829 20:35:57 spdk_dd -- scripts/common.sh@340 -- # ver1_l=2 00:08:02.829 20:35:57 spdk_dd -- scripts/common.sh@341 -- # ver2_l=1 00:08:02.829 20:35:57 spdk_dd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:02.829 20:35:57 spdk_dd -- scripts/common.sh@344 -- # case "$op" in 00:08:02.829 20:35:57 spdk_dd -- scripts/common.sh@345 -- # : 1 00:08:02.829 20:35:57 spdk_dd -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:02.829 20:35:57 spdk_dd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:02.829 20:35:57 spdk_dd -- scripts/common.sh@365 -- # decimal 1 00:08:02.829 20:35:57 spdk_dd -- scripts/common.sh@353 -- # local d=1 00:08:02.829 20:35:57 spdk_dd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:02.829 20:35:57 spdk_dd -- scripts/common.sh@355 -- # echo 1 00:08:02.829 20:35:57 spdk_dd -- scripts/common.sh@365 -- # ver1[v]=1 00:08:02.829 20:35:57 spdk_dd -- scripts/common.sh@366 -- # decimal 2 00:08:02.829 20:35:57 spdk_dd -- scripts/common.sh@353 -- # local d=2 00:08:02.829 20:35:57 spdk_dd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:02.829 20:35:57 spdk_dd -- scripts/common.sh@355 -- # echo 2 00:08:02.829 20:35:57 spdk_dd -- scripts/common.sh@366 -- # ver2[v]=2 00:08:02.830 20:35:57 spdk_dd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:02.830 20:35:57 spdk_dd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:02.830 20:35:57 spdk_dd -- scripts/common.sh@368 -- # return 0 00:08:02.830 20:35:57 spdk_dd -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:02.830 20:35:57 spdk_dd -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:02.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:02.830 --rc genhtml_branch_coverage=1 00:08:02.830 --rc genhtml_function_coverage=1 00:08:02.830 --rc genhtml_legend=1 00:08:02.830 --rc geninfo_all_blocks=1 00:08:02.830 --rc geninfo_unexecuted_blocks=1 00:08:02.830 00:08:02.830 ' 00:08:02.830 20:35:57 spdk_dd -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:02.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:02.830 --rc genhtml_branch_coverage=1 00:08:02.830 --rc genhtml_function_coverage=1 00:08:02.830 --rc genhtml_legend=1 00:08:02.830 --rc geninfo_all_blocks=1 00:08:02.830 --rc geninfo_unexecuted_blocks=1 00:08:02.830 00:08:02.830 ' 00:08:02.830 20:35:57 spdk_dd -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:02.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:02.830 --rc genhtml_branch_coverage=1 00:08:02.830 --rc genhtml_function_coverage=1 00:08:02.830 --rc genhtml_legend=1 00:08:02.830 --rc geninfo_all_blocks=1 00:08:02.830 --rc geninfo_unexecuted_blocks=1 00:08:02.830 00:08:02.830 ' 00:08:02.830 20:35:57 spdk_dd -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:02.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:02.830 --rc genhtml_branch_coverage=1 00:08:02.830 --rc genhtml_function_coverage=1 00:08:02.830 --rc genhtml_legend=1 00:08:02.830 --rc geninfo_all_blocks=1 00:08:02.830 --rc geninfo_unexecuted_blocks=1 00:08:02.830 00:08:02.830 ' 00:08:02.830 20:35:57 spdk_dd -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:02.830 20:35:57 spdk_dd -- scripts/common.sh@15 -- # shopt -s extglob 00:08:02.830 20:35:57 spdk_dd -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:02.830 20:35:57 spdk_dd -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:02.830 20:35:57 spdk_dd -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:02.830 20:35:57 spdk_dd -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.830 20:35:57 spdk_dd -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.830 20:35:57 spdk_dd -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.830 20:35:57 spdk_dd -- paths/export.sh@5 -- # export PATH 00:08:02.830 20:35:57 spdk_dd -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.830 20:35:57 spdk_dd -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:03.398 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:03.398 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:08:03.398 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:08:03.398 20:35:58 spdk_dd -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace)) 00:08:03.398 20:35:58 spdk_dd -- dd/dd.sh@11 -- # nvme_in_userspace 00:08:03.398 20:35:58 spdk_dd -- scripts/common.sh@312 -- # local bdf bdfs 00:08:03.398 20:35:58 spdk_dd -- scripts/common.sh@313 -- # local nvmes 00:08:03.398 20:35:58 spdk_dd -- scripts/common.sh@315 -- # [[ -n '' ]] 00:08:03.398 20:35:58 spdk_dd -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:08:03.398 20:35:58 spdk_dd -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:08:03.398 20:35:58 spdk_dd -- scripts/common.sh@298 -- # local bdf= 00:08:03.398 20:35:58 spdk_dd -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:08:03.398 20:35:58 spdk_dd -- scripts/common.sh@233 -- # local class 00:08:03.398 20:35:58 spdk_dd -- scripts/common.sh@234 -- # local subclass 00:08:03.398 20:35:58 spdk_dd -- scripts/common.sh@235 -- # local progif 00:08:03.398 20:35:58 spdk_dd -- scripts/common.sh@236 -- # printf %02x 1 00:08:03.398 20:35:58 spdk_dd -- scripts/common.sh@236 -- # class=01 00:08:03.398 20:35:58 spdk_dd -- scripts/common.sh@237 -- # printf %02x 8 00:08:03.398 20:35:58 spdk_dd -- scripts/common.sh@237 -- # subclass=08 00:08:03.398 20:35:58 spdk_dd -- scripts/common.sh@238 -- # printf %02x 2 00:08:03.398 20:35:58 spdk_dd -- 
scripts/common.sh@238 -- # progif=02 00:08:03.398 20:35:58 spdk_dd -- scripts/common.sh@240 -- # hash lspci 00:08:03.398 20:35:58 spdk_dd -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:08:03.398 20:35:58 spdk_dd -- scripts/common.sh@242 -- # lspci -mm -n -D 00:08:03.398 20:35:58 spdk_dd -- scripts/common.sh@243 -- # grep -i -- -p02 00:08:03.398 20:35:58 spdk_dd -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:08:03.398 20:35:58 spdk_dd -- scripts/common.sh@245 -- # tr -d '"' 00:08:03.398 20:35:58 spdk_dd -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:08:03.398 20:35:58 spdk_dd -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:08:03.398 20:35:58 spdk_dd -- scripts/common.sh@18 -- # local i 00:08:03.398 20:35:58 spdk_dd -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:08:03.398 20:35:58 spdk_dd -- scripts/common.sh@25 -- # [[ -z '' ]] 00:08:03.398 20:35:58 spdk_dd -- scripts/common.sh@27 -- # return 0 00:08:03.398 20:35:58 spdk_dd -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:08:03.398 20:35:58 spdk_dd -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:08:03.398 20:35:58 spdk_dd -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:08:03.398 20:35:58 spdk_dd -- scripts/common.sh@18 -- # local i 00:08:03.398 20:35:58 spdk_dd -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:08:03.398 20:35:58 spdk_dd -- scripts/common.sh@25 -- # [[ -z '' ]] 00:08:03.398 20:35:58 spdk_dd -- scripts/common.sh@27 -- # return 0 00:08:03.398 20:35:58 spdk_dd -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:08:03.398 20:35:58 spdk_dd -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:08:03.398 20:35:58 spdk_dd -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:08:03.398 20:35:58 spdk_dd -- scripts/common.sh@323 -- # uname -s 00:08:03.398 20:35:58 spdk_dd -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:08:03.398 20:35:58 spdk_dd -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:08:03.398 20:35:58 spdk_dd -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:08:03.398 20:35:58 spdk_dd -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:08:03.398 20:35:58 spdk_dd -- scripts/common.sh@323 -- # uname -s 00:08:03.398 20:35:58 spdk_dd -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:08:03.398 20:35:58 spdk_dd -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:08:03.398 20:35:58 spdk_dd -- scripts/common.sh@328 -- # (( 2 )) 00:08:03.398 20:35:58 spdk_dd -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:08:03.398 20:35:58 spdk_dd -- dd/dd.sh@13 -- # check_liburing 00:08:03.398 20:35:58 spdk_dd -- dd/common.sh@139 -- # local lib 00:08:03.398 20:35:58 spdk_dd -- dd/common.sh@140 -- # local -g liburing_in_use=0 00:08:03.398 20:35:58 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:03.398 20:35:58 spdk_dd -- dd/common.sh@137 -- # grep NEEDED 00:08:03.398 20:35:58 spdk_dd -- dd/common.sh@137 -- # objdump -p /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:03.398 20:35:58 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_malloc.so.6.0 == liburing.so.* ]] 00:08:03.398 20:35:58 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:03.398 20:35:58 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_null.so.6.0 == liburing.so.* ]] 00:08:03.398 20:35:58 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:03.398 20:35:58 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_nvme.so.7.1 == liburing.so.* ]] 
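An aside on the device discovery traced above: nvme_in_userspace walks every PCI function and keeps the ones whose class code identifies an NVM Express controller (base class 01, subclass 08, programming interface 02), skipping any device the host is already using. The same class-code check can be sketched directly from sysfs; this is an illustration only, not the SPDK helper itself, and the function name is made up:

# Sketch: list PCI functions whose class code marks them as NVMe controllers.
# /sys/bus/pci/devices/*/class holds the 24-bit class code, e.g. 0x010802.
list_nvme_bdfs() {
    local dev class
    for dev in /sys/bus/pci/devices/*; do
        read -r class < "$dev/class"
        [[ $class == 0x010802 ]] && basename "$dev"
    done
}
list_nvme_bdfs    # on this VM it would print 0000:00:10.0 and 0000:00:11.0

The real script additionally filters out devices that are mounted or otherwise claimed by the host, which is why the virtio disk at 0000:00:03.0 is excluded above.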
00:08:03.398 20:35:58 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:03.398 20:35:58 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_passthru.so.6.0 == liburing.so.* ]] 00:08:03.398 20:35:58 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:03.398 20:35:58 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_lvol.so.6.0 == liburing.so.* ]] 00:08:03.398 20:35:58 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:03.398 20:35:58 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_raid.so.6.0 == liburing.so.* ]] 00:08:03.398 20:35:58 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:03.398 20:35:58 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_error.so.6.0 == liburing.so.* ]] 00:08:03.398 20:35:58 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:03.398 20:35:58 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_gpt.so.6.0 == liburing.so.* ]] 00:08:03.398 20:35:58 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:03.398 20:35:58 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_split.so.6.0 == liburing.so.* ]] 00:08:03.398 20:35:58 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:03.398 20:35:58 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_delay.so.6.0 == liburing.so.* ]] 00:08:03.398 20:35:58 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:03.398 20:35:58 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_zone_block.so.6.0 == liburing.so.* ]] 00:08:03.398 20:35:58 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:03.398 20:35:58 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs_bdev.so.6.0 == liburing.so.* ]] 00:08:03.398 20:35:58 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:03.398 20:35:58 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs.so.11.0 == liburing.so.* ]] 00:08:03.398 20:35:58 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:03.398 20:35:58 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob_bdev.so.12.0 == liburing.so.* ]] 00:08:03.398 20:35:58 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:03.398 20:35:58 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_lvol.so.11.0 == liburing.so.* ]] 00:08:03.398 20:35:58 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:03.398 20:35:58 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob.so.12.0 == liburing.so.* ]] 00:08:03.398 20:35:58 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:03.398 20:35:58 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_nvme.so.15.0 == liburing.so.* ]] 00:08:03.398 20:35:58 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:03.399 20:35:58 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_provider.so.7.0 == liburing.so.* ]] 00:08:03.399 20:35:58 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:03.399 20:35:58 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_utils.so.1.0 == liburing.so.* ]] 00:08:03.399 20:35:58 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:03.399 20:35:58 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_aio.so.6.0 == liburing.so.* ]] 00:08:03.399 20:35:58 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:03.399 20:35:58 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_ftl.so.6.0 == liburing.so.* ]] 00:08:03.399 20:35:58 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:03.399 20:35:58 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ftl.so.9.0 == liburing.so.* ]] 00:08:03.399 20:35:58 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:03.399 20:35:58 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_virtio.so.6.0 == liburing.so.* ]] 00:08:03.399 20:35:58 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 
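The long run of [[ ... == liburing.so.* ]] tests around this point is check_liburing scanning every DT_NEEDED entry of the spdk_dd binary for a liburing dependency. The loop below is a condensed sketch of that scan with an illustrative helper name, not the dd/common.sh code verbatim:

# Sketch: decide whether a binary is dynamically linked against liburing by
# reading its DT_NEEDED entries, mirroring the objdump | grep NEEDED loop traced here.
needs_liburing() {
    local bin=$1 tag lib
    while read -r tag lib; do              # objdump lines look like "  NEEDED  liburing.so.2"
        [[ $lib == liburing.so.* ]] && return 0
    done < <(objdump -p "$bin" | grep NEEDED)
    return 1
}
needs_liburing /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd && printf '* spdk_dd linked to liburing\n'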
00:08:03.399 20:35:58 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_virtio.so.7.0 == liburing.so.* ]] 00:08:03.399 20:35:58 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:03.399 20:35:58 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vfio_user.so.5.0 == liburing.so.* ]] 00:08:03.399 20:35:58 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:03.399 20:35:58 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_iscsi.so.6.0 == liburing.so.* ]] 00:08:03.399 20:35:58 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:03.399 20:35:58 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_uring.so.6.0 == liburing.so.* ]] 00:08:03.399 20:35:58 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:03.399 20:35:58 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_error.so.2.0 == liburing.so.* ]] 00:08:03.399 20:35:58 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:03.399 20:35:58 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_ioat.so.6.0 == liburing.so.* ]] 00:08:03.399 20:35:58 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:03.399 20:35:58 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ioat.so.7.0 == liburing.so.* ]] 00:08:03.399 20:35:58 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:03.399 20:35:58 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_dsa.so.5.0 == liburing.so.* ]] 00:08:03.399 20:35:58 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:03.399 20:35:58 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_iaa.so.3.0 == liburing.so.* ]] 00:08:03.399 20:35:58 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:03.399 20:35:58 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_idxd.so.12.1 == liburing.so.* ]] 00:08:03.399 20:35:58 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:03.399 20:35:58 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dynamic.so.4.0 == liburing.so.* ]] 00:08:03.399 20:35:58 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:03.399 20:35:58 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_env_dpdk.so.15.1 == liburing.so.* ]] 00:08:03.399 20:35:58 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:03.399 20:35:58 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dpdk_governor.so.4.0 == liburing.so.* ]] 00:08:03.399 20:35:58 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:03.399 20:35:58 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_gscheduler.so.4.0 == liburing.so.* ]] 00:08:03.399 20:35:58 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:03.399 20:35:58 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_posix.so.6.0 == liburing.so.* ]] 00:08:03.399 20:35:58 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:03.399 20:35:58 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_uring.so.5.0 == liburing.so.* ]] 00:08:03.399 20:35:58 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:03.399 20:35:58 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_file.so.2.0 == liburing.so.* ]] 00:08:03.399 20:35:58 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:03.399 20:35:58 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_linux.so.1.0 == liburing.so.* ]] 00:08:03.399 20:35:58 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:03.399 20:35:58 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_fsdev_aio.so.1.0 == liburing.so.* ]] 00:08:03.399 20:35:58 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:03.399 20:35:58 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_fsdev.so.2.0 == liburing.so.* ]] 00:08:03.399 20:35:58 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:03.399 20:35:58 spdk_dd -- 
dd/common.sh@143 -- # [[ libspdk_event.so.14.0 == liburing.so.* ]] 00:08:03.399 20:35:58 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:03.399 20:35:58 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_bdev.so.6.0 == liburing.so.* ]] 00:08:03.399 20:35:58 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:03.399 20:35:58 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev.so.17.0 == liburing.so.* ]] 00:08:03.399 20:35:58 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:03.399 20:35:58 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_notify.so.6.0 == liburing.so.* ]] 00:08:03.399 20:35:58 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:03.399 20:35:58 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_accel.so.6.0 == liburing.so.* ]] 00:08:03.399 20:35:58 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:03.399 20:35:58 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel.so.16.0 == liburing.so.* ]] 00:08:03.399 20:35:58 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:03.399 20:35:58 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_dma.so.5.0 == liburing.so.* ]] 00:08:03.399 20:35:58 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:03.399 20:35:58 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_vmd.so.6.0 == liburing.so.* ]] 00:08:03.399 20:35:58 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:03.399 20:35:58 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vmd.so.6.0 == liburing.so.* ]] 00:08:03.399 20:35:58 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:03.399 20:35:58 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_sock.so.5.0 == liburing.so.* ]] 00:08:03.399 20:35:58 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:03.399 20:35:58 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock.so.10.0 == liburing.so.* ]] 00:08:03.399 20:35:58 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:03.399 20:35:58 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_iobuf.so.3.0 == liburing.so.* ]] 00:08:03.399 20:35:58 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:03.399 20:35:58 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_keyring.so.1.0 == liburing.so.* ]] 00:08:03.399 20:35:58 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:03.399 20:35:58 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_init.so.6.0 == liburing.so.* ]] 00:08:03.399 20:35:58 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:03.399 20:35:58 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_thread.so.11.0 == liburing.so.* ]] 00:08:03.399 20:35:58 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:03.399 20:35:58 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_trace.so.11.0 == liburing.so.* ]] 00:08:03.399 20:35:58 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:03.399 20:35:58 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring.so.2.0 == liburing.so.* ]] 00:08:03.399 20:35:58 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:03.399 20:35:58 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rpc.so.6.0 == liburing.so.* ]] 00:08:03.399 20:35:58 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:03.399 20:35:58 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_jsonrpc.so.6.0 == liburing.so.* ]] 00:08:03.399 20:35:58 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:03.399 20:35:58 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_json.so.6.0 == liburing.so.* ]] 00:08:03.399 20:35:58 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:03.399 20:35:58 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_util.so.10.1 == liburing.so.* ]] 00:08:03.399 20:35:58 spdk_dd -- dd/common.sh@142 -- 
# read -r _ lib _ 00:08:03.399 20:35:58 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_log.so.7.1 == liburing.so.* ]] 00:08:03.399 20:35:58 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:03.399 20:35:58 spdk_dd -- dd/common.sh@143 -- # [[ librte_bus_pci.so.24 == liburing.so.* ]] 00:08:03.399 20:35:58 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:03.399 20:35:58 spdk_dd -- dd/common.sh@143 -- # [[ librte_cryptodev.so.24 == liburing.so.* ]] 00:08:03.399 20:35:58 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:03.399 20:35:58 spdk_dd -- dd/common.sh@143 -- # [[ librte_dmadev.so.24 == liburing.so.* ]] 00:08:03.399 20:35:58 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:03.399 20:35:58 spdk_dd -- dd/common.sh@143 -- # [[ librte_eal.so.24 == liburing.so.* ]] 00:08:03.399 20:35:58 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:03.399 20:35:58 spdk_dd -- dd/common.sh@143 -- # [[ librte_ethdev.so.24 == liburing.so.* ]] 00:08:03.399 20:35:58 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:03.399 20:35:58 spdk_dd -- dd/common.sh@143 -- # [[ librte_hash.so.24 == liburing.so.* ]] 00:08:03.399 20:35:58 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:03.399 20:35:58 spdk_dd -- dd/common.sh@143 -- # [[ librte_kvargs.so.24 == liburing.so.* ]] 00:08:03.399 20:35:58 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:03.399 20:35:58 spdk_dd -- dd/common.sh@143 -- # [[ librte_log.so.24 == liburing.so.* ]] 00:08:03.399 20:35:58 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:03.399 20:35:58 spdk_dd -- dd/common.sh@143 -- # [[ librte_mbuf.so.24 == liburing.so.* ]] 00:08:03.399 20:35:58 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:03.399 20:35:58 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool.so.24 == liburing.so.* ]] 00:08:03.399 20:35:58 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:03.399 20:35:58 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool_ring.so.24 == liburing.so.* ]] 00:08:03.399 20:35:58 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:03.399 20:35:58 spdk_dd -- dd/common.sh@143 -- # [[ librte_net.so.24 == liburing.so.* ]] 00:08:03.399 20:35:58 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:03.399 20:35:58 spdk_dd -- dd/common.sh@143 -- # [[ librte_pci.so.24 == liburing.so.* ]] 00:08:03.399 20:35:58 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:03.399 20:35:58 spdk_dd -- dd/common.sh@143 -- # [[ librte_power.so.24 == liburing.so.* ]] 00:08:03.399 20:35:58 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:03.399 20:35:58 spdk_dd -- dd/common.sh@143 -- # [[ librte_rcu.so.24 == liburing.so.* ]] 00:08:03.399 20:35:58 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:03.399 20:35:58 spdk_dd -- dd/common.sh@143 -- # [[ librte_ring.so.24 == liburing.so.* ]] 00:08:03.399 20:35:58 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:03.399 20:35:58 spdk_dd -- dd/common.sh@143 -- # [[ librte_telemetry.so.24 == liburing.so.* ]] 00:08:03.399 20:35:58 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:03.399 20:35:58 spdk_dd -- dd/common.sh@143 -- # [[ librte_vhost.so.24 == liburing.so.* ]] 00:08:03.399 20:35:58 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:03.399 20:35:58 spdk_dd -- dd/common.sh@143 -- # [[ liburing.so.2 == liburing.so.* ]] 00:08:03.399 20:35:58 spdk_dd -- dd/common.sh@144 -- # printf '* spdk_dd linked to liburing\n' 00:08:03.399 * spdk_dd linked to liburing 00:08:03.399 20:35:58 spdk_dd -- dd/common.sh@146 -- # [[ -e 
/home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:08:03.399 20:35:58 spdk_dd -- dd/common.sh@147 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:08:03.400 20:35:58 spdk_dd -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:08:03.400 20:35:58 spdk_dd -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:08:03.400 20:35:58 spdk_dd -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:08:03.400 20:35:58 spdk_dd -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:08:03.400 20:35:58 spdk_dd -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:08:03.400 20:35:58 spdk_dd -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:08:03.400 20:35:58 spdk_dd -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:08:03.400 20:35:58 spdk_dd -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:08:03.400 20:35:58 spdk_dd -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:08:03.400 20:35:58 spdk_dd -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:08:03.400 20:35:58 spdk_dd -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:08:03.400 20:35:58 spdk_dd -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:08:03.400 20:35:58 spdk_dd -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:08:03.400 20:35:58 spdk_dd -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:08:03.400 20:35:58 spdk_dd -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:08:03.400 20:35:58 spdk_dd -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:08:03.400 20:35:58 spdk_dd -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:08:03.400 20:35:58 spdk_dd -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:08:03.400 20:35:58 spdk_dd -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:08:03.400 20:35:58 spdk_dd -- common/build_config.sh@20 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:08:03.400 20:35:58 spdk_dd -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:08:03.400 20:35:58 spdk_dd -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:08:03.400 20:35:58 spdk_dd -- common/build_config.sh@23 -- # CONFIG_CET=n 00:08:03.400 20:35:58 spdk_dd -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:08:03.400 20:35:58 spdk_dd -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:08:03.400 20:35:58 spdk_dd -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:08:03.400 20:35:58 spdk_dd -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:08:03.400 20:35:58 spdk_dd -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:08:03.400 20:35:58 spdk_dd -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:08:03.400 20:35:58 spdk_dd -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:08:03.400 20:35:58 spdk_dd -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:08:03.400 20:35:58 spdk_dd -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:08:03.400 20:35:58 spdk_dd -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:08:03.400 20:35:58 spdk_dd -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:08:03.400 20:35:58 spdk_dd -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:08:03.400 20:35:58 spdk_dd -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:08:03.400 20:35:58 spdk_dd -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:08:03.400 20:35:58 spdk_dd -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:08:03.400 20:35:58 spdk_dd -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:08:03.400 20:35:58 spdk_dd -- common/build_config.sh@40 -- # 
CONFIG_CRYPTO=n 00:08:03.400 20:35:58 spdk_dd -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:08:03.400 20:35:58 spdk_dd -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:08:03.400 20:35:58 spdk_dd -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:08:03.400 20:35:58 spdk_dd -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:08:03.400 20:35:58 spdk_dd -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:08:03.400 20:35:58 spdk_dd -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:08:03.400 20:35:58 spdk_dd -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:08:03.400 20:35:58 spdk_dd -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:08:03.400 20:35:58 spdk_dd -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:08:03.400 20:35:58 spdk_dd -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:08:03.400 20:35:58 spdk_dd -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:08:03.400 20:35:58 spdk_dd -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:08:03.400 20:35:58 spdk_dd -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:08:03.400 20:35:58 spdk_dd -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:08:03.400 20:35:58 spdk_dd -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:08:03.400 20:35:58 spdk_dd -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:08:03.400 20:35:58 spdk_dd -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 00:08:03.400 20:35:58 spdk_dd -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:08:03.400 20:35:58 spdk_dd -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:08:03.400 20:35:58 spdk_dd -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=y 00:08:03.400 20:35:58 spdk_dd -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:08:03.400 20:35:58 spdk_dd -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:08:03.400 20:35:58 spdk_dd -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:08:03.400 20:35:58 spdk_dd -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:08:03.400 20:35:58 spdk_dd -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:08:03.400 20:35:58 spdk_dd -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:08:03.400 20:35:58 spdk_dd -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:08:03.400 20:35:58 spdk_dd -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:08:03.400 20:35:58 spdk_dd -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:08:03.400 20:35:58 spdk_dd -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:08:03.400 20:35:58 spdk_dd -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:08:03.400 20:35:58 spdk_dd -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:08:03.400 20:35:58 spdk_dd -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:08:03.400 20:35:58 spdk_dd -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:08:03.400 20:35:58 spdk_dd -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:08:03.400 20:35:58 spdk_dd -- common/build_config.sh@76 -- # CONFIG_FC=n 00:08:03.400 20:35:58 spdk_dd -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:08:03.400 20:35:58 spdk_dd -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:08:03.400 20:35:58 spdk_dd -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:08:03.400 20:35:58 spdk_dd -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:08:03.400 20:35:58 spdk_dd -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:08:03.400 20:35:58 spdk_dd -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:08:03.400 20:35:58 spdk_dd 
-- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:08:03.400 20:35:58 spdk_dd -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:08:03.400 20:35:58 spdk_dd -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:08:03.400 20:35:58 spdk_dd -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:08:03.400 20:35:58 spdk_dd -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:08:03.400 20:35:58 spdk_dd -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:08:03.400 20:35:58 spdk_dd -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:08:03.400 20:35:58 spdk_dd -- common/build_config.sh@90 -- # CONFIG_URING=y 00:08:03.400 20:35:58 spdk_dd -- dd/common.sh@149 -- # [[ y != y ]] 00:08:03.400 20:35:58 spdk_dd -- dd/common.sh@152 -- # export liburing_in_use=1 00:08:03.400 20:35:58 spdk_dd -- dd/common.sh@152 -- # liburing_in_use=1 00:08:03.400 20:35:58 spdk_dd -- dd/common.sh@153 -- # return 0 00:08:03.400 20:35:58 spdk_dd -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 00:08:03.400 20:35:58 spdk_dd -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:08:03.400 20:35:58 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:03.400 20:35:58 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:03.400 20:35:58 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:08:03.400 ************************************ 00:08:03.400 START TEST spdk_dd_basic_rw 00:08:03.400 ************************************ 00:08:03.400 20:35:58 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:08:03.660 * Looking for test storage... 00:08:03.660 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:03.660 20:35:58 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:03.660 20:35:58 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1693 -- # lcov --version 00:08:03.660 20:35:58 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:03.660 20:35:58 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:03.660 20:35:58 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:03.660 20:35:58 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:03.660 20:35:58 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:03.660 20:35:58 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@336 -- # IFS=.-: 00:08:03.660 20:35:58 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@336 -- # read -ra ver1 00:08:03.660 20:35:58 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@337 -- # IFS=.-: 00:08:03.660 20:35:58 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@337 -- # read -ra ver2 00:08:03.660 20:35:58 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@338 -- # local 'op=<' 00:08:03.660 20:35:58 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@340 -- # ver1_l=2 00:08:03.660 20:35:58 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@341 -- # ver2_l=1 00:08:03.660 20:35:58 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:03.660 20:35:58 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@344 -- # case "$op" in 00:08:03.660 20:35:58 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@345 -- # : 1 00:08:03.660 20:35:58 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:03.660 20:35:58 spdk_dd.spdk_dd_basic_rw -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:03.660 20:35:58 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@365 -- # decimal 1 00:08:03.660 20:35:58 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@353 -- # local d=1 00:08:03.660 20:35:58 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:03.660 20:35:58 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@355 -- # echo 1 00:08:03.660 20:35:58 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@365 -- # ver1[v]=1 00:08:03.660 20:35:58 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@366 -- # decimal 2 00:08:03.660 20:35:58 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@353 -- # local d=2 00:08:03.660 20:35:58 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:03.660 20:35:58 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@355 -- # echo 2 00:08:03.660 20:35:58 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@366 -- # ver2[v]=2 00:08:03.660 20:35:58 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:03.660 20:35:58 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:03.660 20:35:58 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@368 -- # return 0 00:08:03.660 20:35:58 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:03.660 20:35:58 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:03.660 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:03.660 --rc genhtml_branch_coverage=1 00:08:03.660 --rc genhtml_function_coverage=1 00:08:03.660 --rc genhtml_legend=1 00:08:03.660 --rc geninfo_all_blocks=1 00:08:03.660 --rc geninfo_unexecuted_blocks=1 00:08:03.660 00:08:03.660 ' 00:08:03.660 20:35:58 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:03.660 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:03.660 --rc genhtml_branch_coverage=1 00:08:03.660 --rc genhtml_function_coverage=1 00:08:03.660 --rc genhtml_legend=1 00:08:03.660 --rc geninfo_all_blocks=1 00:08:03.660 --rc geninfo_unexecuted_blocks=1 00:08:03.660 00:08:03.660 ' 00:08:03.660 20:35:58 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:03.660 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:03.660 --rc genhtml_branch_coverage=1 00:08:03.660 --rc genhtml_function_coverage=1 00:08:03.660 --rc genhtml_legend=1 00:08:03.660 --rc geninfo_all_blocks=1 00:08:03.660 --rc geninfo_unexecuted_blocks=1 00:08:03.660 00:08:03.660 ' 00:08:03.660 20:35:58 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:03.660 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:03.660 --rc genhtml_branch_coverage=1 00:08:03.660 --rc genhtml_function_coverage=1 00:08:03.660 --rc genhtml_legend=1 00:08:03.660 --rc geninfo_all_blocks=1 00:08:03.660 --rc geninfo_unexecuted_blocks=1 00:08:03.660 00:08:03.660 ' 00:08:03.660 20:35:58 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:03.660 20:35:58 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@15 -- # shopt -s extglob 00:08:03.660 20:35:58 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:03.660 20:35:58 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:03.660 20:35:58 spdk_dd.spdk_dd_basic_rw -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:03.660 20:35:58 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:03.660 20:35:58 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:03.660 20:35:58 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:03.660 20:35:58 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@5 -- # export PATH 00:08:03.660 20:35:58 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:03.661 20:35:58 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 00:08:03.661 20:35:58 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@82 -- # nvmes=("$@") 00:08:03.661 20:35:58 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 00:08:03.661 20:35:58 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:10.0 00:08:03.661 20:35:58 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 00:08:03.661 20:35:58 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:08:03.661 20:35:58 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0 00:08:03.661 20:35:58 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:03.661 20:35:58 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@92 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 
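The next stretch of the trace is get_native_nvme_bs: it runs spdk_nvme_identify against the controller, finds which LBA format is current, and reads that format's data size so the suite knows the namespace's native block size (4096 bytes here). A compact sketch of that two-step extraction follows, with an illustrative function name and the regexes kept in shell variables for readability; the SPDK helper itself captures the identify output into an array via mapfile:

# Sketch: derive the native block size of an NVMe namespace from identify output,
# as traced below (find the current LBA format, then that format's data size).
get_native_bs() {
    local traddr=$1 id re_cur re_size
    id=$(/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r "trtype:pcie traddr:$traddr")
    re_cur='Current LBA Format: *LBA Format #([0-9]+)'
    [[ $id =~ $re_cur ]] || return 1
    re_size="LBA Format #${BASH_REMATCH[1]}: *Data Size: *([0-9]+)"
    [[ $id =~ $re_size ]] || return 1
    echo "${BASH_REMATCH[1]}"        # 4096 for LBA format #04 on this namespace
}
get_native_bs 0000:00:10.0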
00:08:03.661 20:35:58 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # get_native_nvme_bs 0000:00:10.0 00:08:03.661 20:35:58 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@124 -- # local pci=0000:00:10.0 lbaf id 00:08:03.661 20:35:58 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # mapfile -t id 00:08:03.661 20:35:58 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:10.0' 00:08:03.922 20:35:58 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update 
Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 22 Data Units Written: 3 Host Read Commands: 496 
Host Write Commands: 3 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ Current LBA Format: *LBA Format #([0-9]+) ]] 00:08:03.922 20:35:58 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@130 -- # lbaf=04 00:08:03.923 20:35:58 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@131 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration 
Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported 
SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 22 Data Units Written: 3 Host Read Commands: 496 Host Write Commands: 3 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format 
#02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ LBA Format #04: Data Size: *([0-9]+) ]] 00:08:03.923 20:35:58 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@132 -- # lbaf=4096 00:08:03.923 20:35:58 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@134 -- # echo 4096 00:08:03.923 20:35:58 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # native_bs=4096 00:08:03.923 20:35:58 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # : 00:08:03.923 20:35:58 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # gen_conf 00:08:03.923 20:35:58 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:08:03.923 20:35:58 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:03.923 20:35:58 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:08:03.923 20:35:58 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:08:03.923 20:35:58 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:03.923 20:35:58 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:08:03.923 ************************************ 00:08:03.923 START TEST dd_bs_lt_native_bs 00:08:03.923 ************************************ 00:08:03.923 20:35:58 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1129 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:08:03.923 20:35:58 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@652 -- # local es=0 00:08:03.923 20:35:58 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:08:03.923 20:35:58 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:03.923 20:35:58 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:03.923 20:35:58 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # type -t 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:03.923 20:35:58 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:03.923 20:35:58 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:03.923 20:35:58 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:03.923 20:35:58 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:03.923 20:35:58 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:03.923 20:35:58 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:08:03.923 { 00:08:03.923 "subsystems": [ 00:08:03.923 { 00:08:03.923 "subsystem": "bdev", 00:08:03.923 "config": [ 00:08:03.923 { 00:08:03.923 "params": { 00:08:03.923 "trtype": "pcie", 00:08:03.923 "traddr": "0000:00:10.0", 00:08:03.923 "name": "Nvme0" 00:08:03.923 }, 00:08:03.923 "method": "bdev_nvme_attach_controller" 00:08:03.923 }, 00:08:03.923 { 00:08:03.923 "method": "bdev_wait_for_examine" 00:08:03.923 } 00:08:03.923 ] 00:08:03.923 } 00:08:03.923 ] 00:08:03.923 } 00:08:03.923 [2024-11-26 20:35:58.876269] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:08:03.923 [2024-11-26 20:35:58.876387] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59805 ] 00:08:04.182 [2024-11-26 20:35:59.033632] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:04.182 [2024-11-26 20:35:59.124395] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.440 [2024-11-26 20:35:59.211434] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:04.440 [2024-11-26 20:35:59.347271] spdk_dd.c:1161:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size 00:08:04.440 [2024-11-26 20:35:59.347348] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:04.711 [2024-11-26 20:35:59.541365] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:04.711 20:35:59 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@655 -- # es=234 00:08:04.711 20:35:59 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:04.711 20:35:59 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@664 -- # es=106 00:08:04.711 20:35:59 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@665 -- # case "$es" in 00:08:04.711 20:35:59 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@672 -- # es=1 00:08:04.711 20:35:59 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:04.711 00:08:04.711 real 0m0.806s 00:08:04.711 user 0m0.528s 00:08:04.711 sys 0m0.225s 00:08:04.711 20:35:59 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:04.712 
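The dd_bs_lt_native_bs case that closes with the END TEST banner just below is a negative test: spdk_dd must refuse a --bs of 2048 because the namespace's native block size is 4096 (matched earlier from the "LBA Format #04: Data Size: 4096" line of the identify output). A minimal stand-alone sketch of that check follows; the spdk_dd path, bdev name and PCIe address are taken from this log, while the /tmp/dd.in input file and the inverted if are stand-ins for the harness's gen_bytes data and NOT wrapper, so treat it as an illustration rather than the harness code itself.

# Sketch of the dd_bs_lt_native_bs negative check (hypothetical stand-alone version).
SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
BDEV_CONF='{"subsystems":[{"subsystem":"bdev","config":[{"params":{"trtype":"pcie","traddr":"0000:00:10.0","name":"Nvme0"},"method":"bdev_nvme_attach_controller"},{"method":"bdev_wait_for_examine"}]}]}'

head -c 8192 /dev/urandom > /tmp/dd.in   # stand-in for the harness's generated input data

# Stand-in for the autotest NOT wrapper: the test passes only if spdk_dd fails.
if "$SPDK_DD" --if=/tmp/dd.in --ob=Nvme0n1 --bs=2048 --json <(printf '%s' "$BDEV_CONF"); then
    echo "FAIL: spdk_dd accepted --bs=2048 below the 4096-byte native block size" >&2
    exit 1
fi
echo "PASS: spdk_dd rejected a --bs smaller than the native block size"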
************************************ 00:08:04.712 END TEST dd_bs_lt_native_bs 00:08:04.712 ************************************ 00:08:04.712 20:35:59 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@10 -- # set +x 00:08:04.712 20:35:59 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096 00:08:04.712 20:35:59 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:04.712 20:35:59 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:04.712 20:35:59 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:08:04.712 ************************************ 00:08:04.712 START TEST dd_rw 00:08:04.712 ************************************ 00:08:04.712 20:35:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1129 -- # basic_rw 4096 00:08:04.712 20:35:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@11 -- # local native_bs=4096 00:08:04.712 20:35:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@12 -- # local count size 00:08:04.712 20:35:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@13 -- # local qds bss 00:08:04.712 20:35:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@15 -- # qds=(1 64) 00:08:04.712 20:35:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:08:04.712 20:35:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:08:04.712 20:35:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:08:04.712 20:35:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:08:04.712 20:35:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:08:04.712 20:35:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:08:04.712 20:35:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:08:04.712 20:35:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:08:04.712 20:35:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:08:04.712 20:35:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:08:04.712 20:35:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:08:04.712 20:35:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:08:04.712 20:35:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:08:04.712 20:35:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:05.662 20:36:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62 00:08:05.662 20:36:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:08:05.662 20:36:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:05.662 20:36:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:05.662 { 00:08:05.662 "subsystems": [ 00:08:05.662 { 00:08:05.662 "subsystem": "bdev", 00:08:05.662 "config": [ 00:08:05.662 { 00:08:05.662 "params": { 00:08:05.662 "trtype": "pcie", 00:08:05.662 "traddr": "0000:00:10.0", 00:08:05.662 "name": "Nvme0" 00:08:05.662 }, 00:08:05.662 "method": "bdev_nvme_attach_controller" 00:08:05.662 }, 00:08:05.662 { 00:08:05.662 "method": "bdev_wait_for_examine" 00:08:05.662 } 00:08:05.662 ] 
00:08:05.662 } 00:08:05.662 ] 00:08:05.662 } 00:08:05.662 [2024-11-26 20:36:00.347654] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:08:05.662 [2024-11-26 20:36:00.347760] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59841 ] 00:08:05.662 [2024-11-26 20:36:00.498639] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:05.662 [2024-11-26 20:36:00.579948] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:05.921 [2024-11-26 20:36:00.659783] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:05.921  [2024-11-26T20:36:01.173Z] Copying: 60/60 [kB] (average 29 MBps) 00:08:06.180 00:08:06.180 20:36:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 00:08:06.181 20:36:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:08:06.181 20:36:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:06.181 20:36:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:06.181 { 00:08:06.181 "subsystems": [ 00:08:06.181 { 00:08:06.181 "subsystem": "bdev", 00:08:06.181 "config": [ 00:08:06.181 { 00:08:06.181 "params": { 00:08:06.181 "trtype": "pcie", 00:08:06.181 "traddr": "0000:00:10.0", 00:08:06.181 "name": "Nvme0" 00:08:06.181 }, 00:08:06.181 "method": "bdev_nvme_attach_controller" 00:08:06.181 }, 00:08:06.181 { 00:08:06.181 "method": "bdev_wait_for_examine" 00:08:06.181 } 00:08:06.181 ] 00:08:06.181 } 00:08:06.181 ] 00:08:06.181 } 00:08:06.181 [2024-11-26 20:36:01.128754] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:08:06.181 [2024-11-26 20:36:01.128871] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59855 ] 00:08:06.439 [2024-11-26 20:36:01.282042] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:06.439 [2024-11-26 20:36:01.362839] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:06.698 [2024-11-26 20:36:01.445577] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:06.698  [2024-11-26T20:36:01.949Z] Copying: 60/60 [kB] (average 29 MBps) 00:08:06.956 00:08:06.956 20:36:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:06.956 20:36:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:08:06.956 20:36:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:06.956 20:36:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:08:06.956 20:36:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:08:06.956 20:36:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:08:06.956 20:36:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:08:06.956 20:36:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:08:06.956 20:36:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:08:06.956 20:36:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:06.956 20:36:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:06.956 { 00:08:06.956 "subsystems": [ 00:08:06.956 { 00:08:06.956 "subsystem": "bdev", 00:08:06.956 "config": [ 00:08:06.956 { 00:08:06.956 "params": { 00:08:06.956 "trtype": "pcie", 00:08:06.956 "traddr": "0000:00:10.0", 00:08:06.956 "name": "Nvme0" 00:08:06.956 }, 00:08:06.956 "method": "bdev_nvme_attach_controller" 00:08:06.956 }, 00:08:06.956 { 00:08:06.956 "method": "bdev_wait_for_examine" 00:08:06.956 } 00:08:06.956 ] 00:08:06.956 } 00:08:06.956 ] 00:08:06.956 } 00:08:06.956 [2024-11-26 20:36:01.916098] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
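With the first 4096-byte, queue-depth-1 write and read-back verified above, the zero-fill pass starting here resets the namespace before the next combination. The same cycle repeats for every block size and queue depth; a condensed sketch of one pass follows, using the spdk_dd invocations from this log (the only substitutions are /dev/urandom in place of gen_bytes and a one-line copy of the bdev config normally produced by gen_conf):

# One dd_rw pass as seen above (bs=4096, qd=1, count=15), written out as plain shell.
SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
DUMP0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
DUMP1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
CONF='{"subsystems":[{"subsystem":"bdev","config":[{"params":{"trtype":"pcie","traddr":"0000:00:10.0","name":"Nvme0"},"method":"bdev_nvme_attach_controller"},{"method":"bdev_wait_for_examine"}]}]}'

head -c 61440 /dev/urandom > "$DUMP0"    # 15 blocks of 4096 bytes; stands in for gen_bytes 61440

"$SPDK_DD" --if="$DUMP0" --ob=Nvme0n1 --bs=4096 --qd=1 --json <(printf '%s' "$CONF")             # write out
"$SPDK_DD" --ib=Nvme0n1 --of="$DUMP1" --bs=4096 --qd=1 --count=15 --json <(printf '%s' "$CONF")  # read back
diff -q "$DUMP0" "$DUMP1"                                                                        # verify
"$SPDK_DD" --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json <(printf '%s' "$CONF")      # clear_nvme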
00:08:06.956 [2024-11-26 20:36:01.916378] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59876 ] 00:08:07.215 [2024-11-26 20:36:02.061883] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:07.215 [2024-11-26 20:36:02.144013] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:07.473 [2024-11-26 20:36:02.228519] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:07.473  [2024-11-26T20:36:02.727Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:08:07.734 00:08:07.734 20:36:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:08:07.734 20:36:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:08:07.734 20:36:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:08:07.734 20:36:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:08:07.734 20:36:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:08:07.734 20:36:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:08:07.734 20:36:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:08.302 20:36:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 00:08:08.302 20:36:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:08:08.302 20:36:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:08.302 20:36:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:08.562 [2024-11-26 20:36:03.322534] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:08:08.562 [2024-11-26 20:36:03.322664] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59896 ] 00:08:08.562 { 00:08:08.562 "subsystems": [ 00:08:08.562 { 00:08:08.562 "subsystem": "bdev", 00:08:08.562 "config": [ 00:08:08.562 { 00:08:08.562 "params": { 00:08:08.562 "trtype": "pcie", 00:08:08.562 "traddr": "0000:00:10.0", 00:08:08.562 "name": "Nvme0" 00:08:08.562 }, 00:08:08.562 "method": "bdev_nvme_attach_controller" 00:08:08.562 }, 00:08:08.562 { 00:08:08.562 "method": "bdev_wait_for_examine" 00:08:08.562 } 00:08:08.562 ] 00:08:08.562 } 00:08:08.562 ] 00:08:08.562 } 00:08:08.562 [2024-11-26 20:36:03.472705] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:08.562 [2024-11-26 20:36:03.552118] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:08.822 [2024-11-26 20:36:03.635977] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:08.822  [2024-11-26T20:36:04.074Z] Copying: 60/60 [kB] (average 58 MBps) 00:08:09.081 00:08:09.081 20:36:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 00:08:09.081 20:36:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:08:09.081 20:36:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:09.081 20:36:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:09.340 { 00:08:09.340 "subsystems": [ 00:08:09.340 { 00:08:09.340 "subsystem": "bdev", 00:08:09.340 "config": [ 00:08:09.340 { 00:08:09.340 "params": { 00:08:09.340 "trtype": "pcie", 00:08:09.340 "traddr": "0000:00:10.0", 00:08:09.340 "name": "Nvme0" 00:08:09.340 }, 00:08:09.340 "method": "bdev_nvme_attach_controller" 00:08:09.340 }, 00:08:09.340 { 00:08:09.340 "method": "bdev_wait_for_examine" 00:08:09.340 } 00:08:09.340 ] 00:08:09.340 } 00:08:09.340 ] 00:08:09.340 } 00:08:09.340 [2024-11-26 20:36:04.111820] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:08:09.340 [2024-11-26 20:36:04.111931] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59914 ] 00:08:09.340 [2024-11-26 20:36:04.262011] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:09.599 [2024-11-26 20:36:04.342560] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:09.599 [2024-11-26 20:36:04.424535] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:09.599  [2024-11-26T20:36:04.851Z] Copying: 60/60 [kB] (average 29 MBps) 00:08:09.858 00:08:09.858 20:36:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:09.858 20:36:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:08:09.858 20:36:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:09.858 20:36:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:08:09.858 20:36:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:08:09.858 20:36:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:08:09.858 20:36:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:08:09.858 20:36:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:08:09.858 20:36:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:08:09.858 20:36:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:09.858 20:36:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:10.119 { 00:08:10.119 "subsystems": [ 00:08:10.119 { 00:08:10.119 "subsystem": "bdev", 00:08:10.119 "config": [ 00:08:10.119 { 00:08:10.119 "params": { 00:08:10.119 "trtype": "pcie", 00:08:10.119 "traddr": "0000:00:10.0", 00:08:10.119 "name": "Nvme0" 00:08:10.119 }, 00:08:10.119 "method": "bdev_nvme_attach_controller" 00:08:10.119 }, 00:08:10.119 { 00:08:10.119 "method": "bdev_wait_for_examine" 00:08:10.119 } 00:08:10.119 ] 00:08:10.119 } 00:08:10.119 ] 00:08:10.119 } 00:08:10.119 [2024-11-26 20:36:04.900352] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:08:10.119 [2024-11-26 20:36:04.900701] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59935 ] 00:08:10.119 [2024-11-26 20:36:05.050421] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:10.377 [2024-11-26 20:36:05.129041] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:10.377 [2024-11-26 20:36:05.210297] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:10.377  [2024-11-26T20:36:05.628Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:08:10.635 00:08:10.893 20:36:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:08:10.893 20:36:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:08:10.893 20:36:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:08:10.893 20:36:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:08:10.893 20:36:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:08:10.893 20:36:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:08:10.893 20:36:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:08:10.893 20:36:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:11.460 20:36:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 00:08:11.460 20:36:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:08:11.460 20:36:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:11.460 20:36:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:11.460 { 00:08:11.460 "subsystems": [ 00:08:11.460 { 00:08:11.460 "subsystem": "bdev", 00:08:11.460 "config": [ 00:08:11.460 { 00:08:11.460 "params": { 00:08:11.460 "trtype": "pcie", 00:08:11.460 "traddr": "0000:00:10.0", 00:08:11.460 "name": "Nvme0" 00:08:11.460 }, 00:08:11.460 "method": "bdev_nvme_attach_controller" 00:08:11.460 }, 00:08:11.460 { 00:08:11.460 "method": "bdev_wait_for_examine" 00:08:11.460 } 00:08:11.460 ] 00:08:11.460 } 00:08:11.460 ] 00:08:11.460 } 00:08:11.460 [2024-11-26 20:36:06.242791] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
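From this point the run moves on from the 4096-byte passes to the 8192-byte ones. The combinations come from the arrays set up at the start of dd_rw: the 4096-byte native block size is shifted left to give block sizes of 4096, 8192 and 16384, queue depths of 1 and 64 are tried for each, and the transfer shrinks from 61440 to 57344 to 49152 bytes (counts of 15, 7 and 3 blocks). A small sketch of that parameter matrix, assuming it mirrors the loop seen in the xtrace:

# Parameter matrix driven by basic_rw, reconstructed from the xtrace in this log.
native_bs=4096
qds=(1 64)
bss=()
for s in 0 1 2; do
    bss+=($((native_bs << s)))           # 4096, 8192, 16384
done

counts=(15 7 3)                          # counts observed for the three block sizes in this run
for i in "${!bss[@]}"; do
    for qd in "${qds[@]}"; do
        echo "bs=${bss[i]} qd=${qd} bytes=$((bss[i] * counts[i]))"   # 61440, 57344, 49152
    done
done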
00:08:11.460 [2024-11-26 20:36:06.243142] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59954 ] 00:08:11.460 [2024-11-26 20:36:06.399688] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:11.720 [2024-11-26 20:36:06.482535] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:11.720 [2024-11-26 20:36:06.564553] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:11.720  [2024-11-26T20:36:07.279Z] Copying: 56/56 [kB] (average 54 MBps) 00:08:12.286 00:08:12.286 20:36:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 00:08:12.286 20:36:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:08:12.286 20:36:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:12.286 20:36:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:12.286 { 00:08:12.286 "subsystems": [ 00:08:12.286 { 00:08:12.286 "subsystem": "bdev", 00:08:12.286 "config": [ 00:08:12.286 { 00:08:12.286 "params": { 00:08:12.286 "trtype": "pcie", 00:08:12.286 "traddr": "0000:00:10.0", 00:08:12.286 "name": "Nvme0" 00:08:12.286 }, 00:08:12.286 "method": "bdev_nvme_attach_controller" 00:08:12.286 }, 00:08:12.286 { 00:08:12.286 "method": "bdev_wait_for_examine" 00:08:12.286 } 00:08:12.286 ] 00:08:12.286 } 00:08:12.286 ] 00:08:12.286 } 00:08:12.286 [2024-11-26 20:36:07.046037] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:08:12.286 [2024-11-26 20:36:07.046224] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59973 ] 00:08:12.286 [2024-11-26 20:36:07.199082] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:12.543 [2024-11-26 20:36:07.278242] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:12.543 [2024-11-26 20:36:07.358431] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:12.543  [2024-11-26T20:36:07.858Z] Copying: 56/56 [kB] (average 27 MBps) 00:08:12.865 00:08:12.865 20:36:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:12.865 20:36:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:08:12.865 20:36:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:12.865 20:36:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:08:12.865 20:36:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:08:12.865 20:36:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:08:12.865 20:36:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:08:12.865 20:36:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:08:12.865 20:36:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:08:12.865 20:36:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:12.865 20:36:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:13.135 { 00:08:13.135 "subsystems": [ 00:08:13.135 { 00:08:13.135 "subsystem": "bdev", 00:08:13.135 "config": [ 00:08:13.135 { 00:08:13.135 "params": { 00:08:13.135 "trtype": "pcie", 00:08:13.135 "traddr": "0000:00:10.0", 00:08:13.135 "name": "Nvme0" 00:08:13.135 }, 00:08:13.135 "method": "bdev_nvme_attach_controller" 00:08:13.135 }, 00:08:13.135 { 00:08:13.135 "method": "bdev_wait_for_examine" 00:08:13.135 } 00:08:13.135 ] 00:08:13.135 } 00:08:13.135 ] 00:08:13.135 } 00:08:13.135 [2024-11-26 20:36:07.835692] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:08:13.135 [2024-11-26 20:36:07.836240] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59994 ] 00:08:13.135 [2024-11-26 20:36:07.984941] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:13.135 [2024-11-26 20:36:08.066911] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:13.394 [2024-11-26 20:36:08.149826] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:13.394  [2024-11-26T20:36:08.645Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:08:13.652 00:08:13.652 20:36:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:08:13.652 20:36:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:08:13.652 20:36:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:08:13.652 20:36:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:08:13.652 20:36:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:08:13.652 20:36:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:08:13.652 20:36:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:14.218 20:36:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 00:08:14.218 20:36:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:08:14.218 20:36:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:14.218 20:36:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:14.476 { 00:08:14.476 "subsystems": [ 00:08:14.476 { 00:08:14.476 "subsystem": "bdev", 00:08:14.476 "config": [ 00:08:14.476 { 00:08:14.476 "params": { 00:08:14.476 "trtype": "pcie", 00:08:14.476 "traddr": "0000:00:10.0", 00:08:14.476 "name": "Nvme0" 00:08:14.476 }, 00:08:14.476 "method": "bdev_nvme_attach_controller" 00:08:14.476 }, 00:08:14.476 { 00:08:14.476 "method": "bdev_wait_for_examine" 00:08:14.476 } 00:08:14.476 ] 00:08:14.476 } 00:08:14.476 ] 00:08:14.476 } 00:08:14.476 [2024-11-26 20:36:09.250422] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:08:14.476 [2024-11-26 20:36:09.250541] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60013 ] 00:08:14.476 [2024-11-26 20:36:09.403411] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:14.733 [2024-11-26 20:36:09.486297] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:14.733 [2024-11-26 20:36:09.566584] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:14.733  [2024-11-26T20:36:09.986Z] Copying: 56/56 [kB] (average 54 MBps) 00:08:14.993 00:08:14.993 20:36:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 00:08:14.993 20:36:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:08:14.993 20:36:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:14.993 20:36:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:15.251 { 00:08:15.251 "subsystems": [ 00:08:15.251 { 00:08:15.251 "subsystem": "bdev", 00:08:15.251 "config": [ 00:08:15.251 { 00:08:15.251 "params": { 00:08:15.251 "trtype": "pcie", 00:08:15.251 "traddr": "0000:00:10.0", 00:08:15.251 "name": "Nvme0" 00:08:15.251 }, 00:08:15.251 "method": "bdev_nvme_attach_controller" 00:08:15.251 }, 00:08:15.251 { 00:08:15.251 "method": "bdev_wait_for_examine" 00:08:15.251 } 00:08:15.251 ] 00:08:15.251 } 00:08:15.251 ] 00:08:15.251 } 00:08:15.251 [2024-11-26 20:36:10.032810] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:08:15.251 [2024-11-26 20:36:10.033205] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60032 ] 00:08:15.251 [2024-11-26 20:36:10.183066] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:15.511 [2024-11-26 20:36:10.262160] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:15.511 [2024-11-26 20:36:10.341759] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:15.511  [2024-11-26T20:36:10.762Z] Copying: 56/56 [kB] (average 54 MBps) 00:08:15.769 00:08:15.769 20:36:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:15.769 20:36:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:08:15.769 20:36:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:15.769 20:36:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:08:15.769 20:36:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:08:15.769 20:36:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:08:15.769 20:36:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:08:15.769 20:36:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:08:15.769 20:36:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:08:15.769 20:36:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:15.769 20:36:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:16.028 { 00:08:16.028 "subsystems": [ 00:08:16.028 { 00:08:16.028 "subsystem": "bdev", 00:08:16.028 "config": [ 00:08:16.028 { 00:08:16.028 "params": { 00:08:16.028 "trtype": "pcie", 00:08:16.028 "traddr": "0000:00:10.0", 00:08:16.028 "name": "Nvme0" 00:08:16.028 }, 00:08:16.028 "method": "bdev_nvme_attach_controller" 00:08:16.028 }, 00:08:16.028 { 00:08:16.028 "method": "bdev_wait_for_examine" 00:08:16.028 } 00:08:16.028 ] 00:08:16.028 } 00:08:16.028 ] 00:08:16.028 } 00:08:16.028 [2024-11-26 20:36:10.798549] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
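The clear_nvme call starting here is the same housekeeping step seen after every pass: it overwrites the start of the namespace with a single 1 MiB block of zeroes so the next pattern is not compared against stale data. A rough stand-alone equivalent is sketched below; the zero_bdev name is made up for this sketch and the config is the same simplified one-liner used earlier.

# Rough equivalent of the clear_nvme step (the zero_bdev name is hypothetical).
zero_bdev() {
    local bdev=$1
    local spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    local conf='{"subsystems":[{"subsystem":"bdev","config":[{"params":{"trtype":"pcie","traddr":"0000:00:10.0","name":"Nvme0"},"method":"bdev_nvme_attach_controller"},{"method":"bdev_wait_for_examine"}]}]}'
    # one 1048576-byte block of zeroes over the start of the namespace, as in the xtrace above
    "$spdk_dd" --if=/dev/zero --bs=1048576 --ob="$bdev" --count=1 --json <(printf '%s' "$conf")
}

zero_bdev Nvme0n1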
00:08:16.028 [2024-11-26 20:36:10.798643] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60053 ] 00:08:16.028 [2024-11-26 20:36:10.941190] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:16.287 [2024-11-26 20:36:11.021973] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:16.287 [2024-11-26 20:36:11.103694] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:16.287  [2024-11-26T20:36:11.539Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:08:16.546 00:08:16.546 20:36:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:08:16.546 20:36:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:08:16.546 20:36:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:08:16.546 20:36:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:08:16.546 20:36:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:08:16.546 20:36:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:08:16.546 20:36:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:08:16.546 20:36:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:17.114 20:36:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 00:08:17.114 20:36:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:08:17.115 20:36:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:17.115 20:36:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:17.115 { 00:08:17.115 "subsystems": [ 00:08:17.115 { 00:08:17.115 "subsystem": "bdev", 00:08:17.115 "config": [ 00:08:17.115 { 00:08:17.115 "params": { 00:08:17.115 "trtype": "pcie", 00:08:17.115 "traddr": "0000:00:10.0", 00:08:17.115 "name": "Nvme0" 00:08:17.115 }, 00:08:17.115 "method": "bdev_nvme_attach_controller" 00:08:17.115 }, 00:08:17.115 { 00:08:17.115 "method": "bdev_wait_for_examine" 00:08:17.115 } 00:08:17.115 ] 00:08:17.115 } 00:08:17.115 ] 00:08:17.115 } 00:08:17.115 [2024-11-26 20:36:12.017229] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:08:17.115 [2024-11-26 20:36:12.017344] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60074 ] 00:08:17.374 [2024-11-26 20:36:12.170723] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:17.374 [2024-11-26 20:36:12.252625] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:17.374 [2024-11-26 20:36:12.333619] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:17.631  [2024-11-26T20:36:12.883Z] Copying: 48/48 [kB] (average 46 MBps) 00:08:17.890 00:08:17.890 20:36:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:08:17.890 20:36:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 00:08:17.890 20:36:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:17.890 20:36:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:17.890 { 00:08:17.890 "subsystems": [ 00:08:17.890 { 00:08:17.890 "subsystem": "bdev", 00:08:17.890 "config": [ 00:08:17.890 { 00:08:17.890 "params": { 00:08:17.890 "trtype": "pcie", 00:08:17.890 "traddr": "0000:00:10.0", 00:08:17.890 "name": "Nvme0" 00:08:17.890 }, 00:08:17.890 "method": "bdev_nvme_attach_controller" 00:08:17.890 }, 00:08:17.890 { 00:08:17.890 "method": "bdev_wait_for_examine" 00:08:17.890 } 00:08:17.890 ] 00:08:17.890 } 00:08:17.890 ] 00:08:17.890 } 00:08:17.890 [2024-11-26 20:36:12.800439] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:08:17.890 [2024-11-26 20:36:12.800560] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60092 ] 00:08:18.149 [2024-11-26 20:36:12.952511] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:18.149 [2024-11-26 20:36:13.030019] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:18.149 [2024-11-26 20:36:13.118174] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:18.432  [2024-11-26T20:36:13.684Z] Copying: 48/48 [kB] (average 46 MBps) 00:08:18.691 00:08:18.691 20:36:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:18.691 20:36:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:08:18.691 20:36:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:18.691 20:36:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:08:18.691 20:36:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:08:18.691 20:36:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:08:18.691 20:36:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:08:18.691 20:36:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:08:18.691 20:36:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:08:18.691 20:36:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:18.691 20:36:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:18.691 { 00:08:18.691 "subsystems": [ 00:08:18.691 { 00:08:18.691 "subsystem": "bdev", 00:08:18.691 "config": [ 00:08:18.691 { 00:08:18.691 "params": { 00:08:18.691 "trtype": "pcie", 00:08:18.691 "traddr": "0000:00:10.0", 00:08:18.691 "name": "Nvme0" 00:08:18.691 }, 00:08:18.691 "method": "bdev_nvme_attach_controller" 00:08:18.691 }, 00:08:18.691 { 00:08:18.691 "method": "bdev_wait_for_examine" 00:08:18.691 } 00:08:18.691 ] 00:08:18.691 } 00:08:18.691 ] 00:08:18.691 } 00:08:18.691 [2024-11-26 20:36:13.603749] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:08:18.691 [2024-11-26 20:36:13.603866] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60104 ] 00:08:18.949 [2024-11-26 20:36:13.755602] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:18.950 [2024-11-26 20:36:13.835043] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:18.950 [2024-11-26 20:36:13.913403] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:19.209  [2024-11-26T20:36:14.462Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:08:19.469 00:08:19.469 20:36:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:08:19.469 20:36:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:08:19.469 20:36:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:08:19.469 20:36:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:08:19.469 20:36:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:08:19.469 20:36:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:08:19.469 20:36:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:20.038 20:36:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 00:08:20.038 20:36:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:08:20.038 20:36:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:20.038 20:36:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:20.038 { 00:08:20.038 "subsystems": [ 00:08:20.038 { 00:08:20.038 "subsystem": "bdev", 00:08:20.038 "config": [ 00:08:20.038 { 00:08:20.038 "params": { 00:08:20.038 "trtype": "pcie", 00:08:20.038 "traddr": "0000:00:10.0", 00:08:20.038 "name": "Nvme0" 00:08:20.038 }, 00:08:20.038 "method": "bdev_nvme_attach_controller" 00:08:20.038 }, 00:08:20.038 { 00:08:20.038 "method": "bdev_wait_for_examine" 00:08:20.038 } 00:08:20.038 ] 00:08:20.038 } 00:08:20.038 ] 00:08:20.038 } 00:08:20.038 [2024-11-26 20:36:14.812014] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:08:20.038 [2024-11-26 20:36:14.812373] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60129 ] 00:08:20.038 [2024-11-26 20:36:14.963846] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:20.296 [2024-11-26 20:36:15.041763] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:20.296 [2024-11-26 20:36:15.120156] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:20.296  [2024-11-26T20:36:15.547Z] Copying: 48/48 [kB] (average 46 MBps) 00:08:20.554 00:08:20.554 20:36:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 00:08:20.554 20:36:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:08:20.554 20:36:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:20.554 20:36:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:20.812 { 00:08:20.812 "subsystems": [ 00:08:20.812 { 00:08:20.812 "subsystem": "bdev", 00:08:20.812 "config": [ 00:08:20.812 { 00:08:20.812 "params": { 00:08:20.812 "trtype": "pcie", 00:08:20.812 "traddr": "0000:00:10.0", 00:08:20.812 "name": "Nvme0" 00:08:20.812 }, 00:08:20.812 "method": "bdev_nvme_attach_controller" 00:08:20.812 }, 00:08:20.812 { 00:08:20.812 "method": "bdev_wait_for_examine" 00:08:20.812 } 00:08:20.812 ] 00:08:20.812 } 00:08:20.812 ] 00:08:20.812 } 00:08:20.812 [2024-11-26 20:36:15.577008] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:08:20.812 [2024-11-26 20:36:15.577118] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60142 ] 00:08:20.812 [2024-11-26 20:36:15.729843] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:21.071 [2024-11-26 20:36:15.811120] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:21.071 [2024-11-26 20:36:15.893467] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:21.071  [2024-11-26T20:36:16.322Z] Copying: 48/48 [kB] (average 46 MBps) 00:08:21.329 00:08:21.329 20:36:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:21.329 20:36:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:08:21.329 20:36:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:21.329 20:36:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:08:21.329 20:36:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:08:21.329 20:36:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:08:21.329 20:36:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:08:21.329 20:36:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:08:21.329 20:36:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:08:21.329 20:36:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:21.329 20:36:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:21.590 { 00:08:21.590 "subsystems": [ 00:08:21.590 { 00:08:21.590 "subsystem": "bdev", 00:08:21.590 "config": [ 00:08:21.590 { 00:08:21.590 "params": { 00:08:21.590 "trtype": "pcie", 00:08:21.590 "traddr": "0000:00:10.0", 00:08:21.590 "name": "Nvme0" 00:08:21.590 }, 00:08:21.590 "method": "bdev_nvme_attach_controller" 00:08:21.590 }, 00:08:21.590 { 00:08:21.590 "method": "bdev_wait_for_examine" 00:08:21.590 } 00:08:21.590 ] 00:08:21.590 } 00:08:21.590 ] 00:08:21.590 } 00:08:21.590 [2024-11-26 20:36:16.372181] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
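The per-test banners and the real/user/sys totals that appear a little further down (END TEST dd_rw, followed by START TEST dd_rw_offset) are printed by the run_test wrapper from common/autotest_common.sh. A much-simplified sketch of what such a wrapper does is shown here; it is an illustration only and omits the xtrace control and error handling of the real helper.

# Much-simplified sketch of a run_test-style wrapper: banner, timed command, banner.
run_test_sketch() {
    local name=$1
    shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"                 # produces the real/user/sys lines seen in this log
    local rc=$?
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return "$rc"
}

run_test_sketch dd_example sleep 1   # in this log the wrapper is invoked as: run_test dd_rw basic_rw 4096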
00:08:21.590 [2024-11-26 20:36:16.372309] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60163 ] 00:08:21.590 [2024-11-26 20:36:16.528301] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:21.848 [2024-11-26 20:36:16.606897] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:21.848 [2024-11-26 20:36:16.686837] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:21.848  [2024-11-26T20:36:17.099Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:08:22.106 00:08:22.106 00:08:22.106 real 0m17.407s 00:08:22.106 user 0m11.947s 00:08:22.106 sys 0m8.019s 00:08:22.106 20:36:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:22.365 ************************************ 00:08:22.365 END TEST dd_rw 00:08:22.365 ************************************ 00:08:22.365 20:36:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:22.365 20:36:17 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 00:08:22.365 20:36:17 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:22.365 20:36:17 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:22.365 20:36:17 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:08:22.365 ************************************ 00:08:22.365 START TEST dd_rw_offset 00:08:22.365 ************************************ 00:08:22.365 20:36:17 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1129 -- # basic_offset 00:08:22.365 20:36:17 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@52 -- # local count seek skip data data_check 00:08:22.365 20:36:17 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@54 -- # gen_bytes 4096 00:08:22.365 20:36:17 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@98 -- # xtrace_disable 00:08:22.365 20:36:17 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:08:22.365 20:36:17 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 00:08:22.365 20:36:17 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@56 -- # 
data=twl4cvajn3he3rxurbqws9mymf7ihg914vi6kt61xr14p5qqrymmw8m37nj3po1ue28u6dnzxg0vn9nmmypce0nv34n9in9pybc2fkcmbfr5tvlh3hzw5gdjkods89ztlffnroc7em0y71wn82arvjhaqo9pseyol9w3tvt491n9yi2ltf47g599kuvqeaobm4x1zvbsdxk32aacqm1of7xmui68pm9abpfziveyeu76m755cvb4yl7l362t3n8n6rjxya1hf98hflp8tdhaty19emgzt33k4b9mzylnfz8anv8tlsuaz95t1h5d3bzh2xhz9jz0qeg9zc1qxsm7jf302jr9thl0n9rkyxgrzn85mc64hcsoog5o1v75e2ram4lnoriwhffr1uey6xmunn6gvpa0zd4uvrn7m3p1kor8dpv9b883bxelx1ptn4qvt6egjbsguvnvamzomz3zevt7jdx0sw0dq6eo728bvkfoymykkn28tu7o6m2odc7xttnl8h5x4pl1gw8hpltsmukeol6jejapweld4o4dtmjvz5e1bxqrpiyhqvu6ypa6m5qxftsyhgl7jpkcbrv6tapkc8kboo29eqovg8lrjxwyf6nxpqb7r2fy35f2uzuvfzw2zzzewrcym38bw1ljb1reza4s4obilqtriih2w35q5r06o8jkux9wdiqjrp8v0sb9bxcblrkcgb6neavt0ws3ucf4me25188f9m3xx1wklu8dd0gca3gya6xbkpvyf6r4ffngzx3ccj2si6zorznn0mihtkf8o3s93eympzlky7v6bdknilnanmrsa0vq8ltgpqivyrh7kjq00i00qb1weo93z97x9skexwb8xdvwgt2s2gx2m0ct8ocgaz8udoz711ypfaw4lm480wkx3k7gmzb40fc3a36vuz83pxsk0i8diayihz3w9pe0lxswq54326neaj8vdybo60nn22qvy33zymkarycb07b0mo1x1i53w0uvz04fdas8cvb2d99jn7u2q1bdzbqkbbgsui4samx38oy5id5fq4vevh008fv60s26uqyzzfh4e7c3kvegpo2yn1fr2pqa8kdmxmcrxp1fjbldrz53l08l4ajye2ghqwqocdz5wvbrk6q2msqw80dpmuaj4dg5319oqn9ffy3q16uqcgu2ra27kn3m8my2gssmlaokct0h169wylvulap8og07dse4jpmumcbv3pizkhu8kixment52pwhux9wlidv380aeaxbzi54o8hs4s1sv84nu0tfx6pdx9ns8jsu29wu06uy7wewl6p9twhc5l1hlxs6l53b6svj881n5jsfv7u8z5aeqe3r7uib6pefoa5v5namtdeaqrbk2l7tpc5j8tn77zeplsby2whu9vxbvhtk7z88sh3k5kh690nt92h68g7iea0lqxdh24tli5484lfuookzrfmj3k4tsqaz9vb7zv8l0qjjk2x7a7raysncapkjenc4zh4zy69vsw5dfj9oa9l4z2llg4sanmxn82pc8ht19h6to22i5j2l4e5520rdmg8l0lbc2eyopo4p1391pdo8gj7xhff8hrg9sdwuocww1g2snbtj8i73zb8sqot9pirdkx9p0j0j5a8u72jo3atm89b3h9s3klm4i90kt6901dg3292f9ocdjx8ccb9g581wq17ag9gp2xii16nz0x7x9yu5zp03uredldwq39k43myravdv0rra7iesp057hlij8hagznxfceiis6awtwcd6rh7ue4qq1c8465djiwdqwxh7yltqtil9pf8cbn3nxwybzqy8brhybdjri7e09ve5tch8equqvm3jl42jfogiuvrg3kw74svxfyxii5tesffr20u7rvty84tvo07sfft1nyv18cvi2salljp46trf552h3mts9zdgb37erqytdh1nf2a9s1vqbqlsdj5gpks1448g4efswklttg7m58lkpb6itorvkzqhw245zxld7m4qxcdypmw82w1t4up6gahzk2baghntko291gy83gcryz488u00itvi4xiawvt0fk96ids1d9bgtf0crp1aeak4xer74o1fqvdpcl554jqw0j5qqjaig041x430udb53n0n86hnw57s6hsiqxt95nxt3ckaufgmx0x3510lk3ai6zptalvikyhzburxb6etrovbj0idhkxgrzh98wc8n8rpioo6bx7qvzdy34mwd2dt0xsplro7s0nw2e0nif5zmou47545ma90fkvymef3suatovcxxgeo408p7rzky0ngjhsdpi4x7f4y3qfxirbhw1z2dllcvgc6ds3shikel4ww4zeqkq68109ea02x1n179498z4scl5p4mi9sm80ub2afs8m0hb1hceok61wso9rm5hicqzqtemf6g5ztn94y0g1px8zrj64s8t0l9v2y0ai110rljhcymlbz4b7d7s21vtvy25tncdk9rme1d8f88400j03w4twsdmnuyo5zcam01l64h539fbvcu5mkctwvz1gvklr51phfjwtyuq0wi94k5okbooo2pm17ijbqyzm6mp5kd2zx9jpmmhfgtj09ceharjdb7zeneqdet536lxolme3t5z1oo0rpz0sw05eiipw0o2xuhcyegm11ueeoyi3qne8owjsbb1rulsyln160phzix0nqay1lazgz2rtw5f0juztakiigpb271jm7znvk2bnbovearovsjwoe0shsangqoj81pyi7wkoshzcpd9dw4ryiv4vcpg5v4zzuep39p3321lc2fojpounj61k5recml0tbokjtcavdk8r8zhofhlm862j8jv8kc5tp7ych0048osqhhex4kii2x1338ubnewh3nc7rvry18jjw2q8vu9qtscgell4me5fqx8zweiu3sry2isjuh410f5tgzckh59uvek1sppvhgobpzmzqqcc61i7syxk1u4nshk1j6l9lqoipsw5c3783ivt26lr7vdepr8w8w2n95w2v0w0vpdjxea7f6ifx8uvht37xip26byhlibloiu2rl0bncipcklj5s61wzolxd19tx2j27nnhetox0pfnxz70taqjtb006yuz160dee1yr967f9ak59msuvs5dvicr1nnpc7xsjrj1vt5bqvh3dv7tj7vncdw3rg7bx098my2k1pdmwr11u4jydynv2hehkq7ecdde4qxyok524ncm51m6mzft6ayif5mx7bejhzc8x8gai31a2uike8wr9lqh4d0l0y41q2r9fmpgchctmvex3mirycd8wynxc8tlwlqh38mltiyam5eh61nwcnq0yz9zf5iyk0ek1wz1fuxy0xfjgb40mb12gnxeszpczoh1k12q5v07xu9hkkt3h4ryirnftexpevw6h2jdwqywgngsg2yllljbt1mccf7rssfrjbbp28wduab90meck0mcfgilekai75ghvs6f1jtka0eap8tgyikplvtt6dq7sobjrksly1z77673
ryn4f5yhvhvk3x6b45ghlocav23dxhpknqbrvhv7q9zofql5awjfiknlywnesnv1mg6jgah3chta7xpf1fz8zh43ffijrvq9plasc58p2dhj00e18k8roj4i8ieiuzsfyknq3qnu9bhoz7mb4lc53atjrixxps9579qi88y001mkupadkcurbfqb3ztiao02dmbv3bnhjj04bfua3bbb9bejurxrpa9g29uwnm843jvexoqo33dxi337u1k8ir4roy4jgjkzi9m1mtr83ydumohfrsft6k7iiwu2obhfdyxaot93ut1hpg0nt9ui1bnnjv0g77azpkwzq3ka1bdpx208fo3sll6v5e1s3n3rxor9rm0ru0e71zt6rur957ptija7ymm2gxqhqmt4l9edi758v15rft5lzyiyq0wbrttbi620nz7r3v36vbq8j26itxsum7tyy6fe135wdz6ugytt4h9b1g4t8t7jv6ioy01sypxabp0shmkxu490awsjyc7a14eb0cnixdp9apy1s3xbcmbz0m7rsw 00:08:22.366 20:36:17 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 00:08:22.366 20:36:17 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # gen_conf 00:08:22.366 20:36:17 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:08:22.366 20:36:17 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:08:22.366 { 00:08:22.366 "subsystems": [ 00:08:22.366 { 00:08:22.366 "subsystem": "bdev", 00:08:22.366 "config": [ 00:08:22.366 { 00:08:22.366 "params": { 00:08:22.366 "trtype": "pcie", 00:08:22.366 "traddr": "0000:00:10.0", 00:08:22.366 "name": "Nvme0" 00:08:22.366 }, 00:08:22.366 "method": "bdev_nvme_attach_controller" 00:08:22.366 }, 00:08:22.366 { 00:08:22.366 "method": "bdev_wait_for_examine" 00:08:22.366 } 00:08:22.366 ] 00:08:22.366 } 00:08:22.366 ] 00:08:22.366 } 00:08:22.366 [2024-11-26 20:36:17.271050] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:08:22.366 [2024-11-26 20:36:17.271179] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60199 ] 00:08:22.625 [2024-11-26 20:36:17.430723] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:22.625 [2024-11-26 20:36:17.521055] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:22.625 [2024-11-26 20:36:17.608336] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:22.883  [2024-11-26T20:36:18.134Z] Copying: 4096/4096 [B] (average 4000 kBps) 00:08:23.141 00:08:23.141 20:36:18 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 00:08:23.141 20:36:18 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # gen_conf 00:08:23.141 20:36:18 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:08:23.141 20:36:18 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:08:23.141 { 00:08:23.141 "subsystems": [ 00:08:23.141 { 00:08:23.141 "subsystem": "bdev", 00:08:23.141 "config": [ 00:08:23.141 { 00:08:23.141 "params": { 00:08:23.141 "trtype": "pcie", 00:08:23.141 "traddr": "0000:00:10.0", 00:08:23.141 "name": "Nvme0" 00:08:23.141 }, 00:08:23.141 "method": "bdev_nvme_attach_controller" 00:08:23.141 }, 00:08:23.141 { 00:08:23.141 "method": "bdev_wait_for_examine" 00:08:23.141 } 00:08:23.141 ] 00:08:23.141 } 00:08:23.141 ] 00:08:23.141 } 00:08:23.141 [2024-11-26 20:36:18.090419] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
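The dd_rw_offset case running here writes a single 4096-byte block of generated text one block into the namespace (--seek=1), reads that block back (--skip=1 --count=1) and string-compares the two, which is what the long [[ ... == ... ]] xtrace below is checking. A compact sketch of that round trip follows, with the same caveats as the earlier sketches (paths and bdev name from the log, simplified config, base64-encoded /dev/urandom standing in for gen_bytes):

# dd_rw_offset round trip: --seek/--skip are counted in I/O units, one 4096-byte block here.
SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
DUMP0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
DUMP1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
CONF='{"subsystems":[{"subsystem":"bdev","config":[{"params":{"trtype":"pcie","traddr":"0000:00:10.0","name":"Nvme0"},"method":"bdev_nvme_attach_controller"},{"method":"bdev_wait_for_examine"}]}]}'

data=$(head -c 3072 /dev/urandom | base64 -w0)   # 4096 printable characters, stand-in for gen_bytes 4096
printf '%s' "$data" > "$DUMP0"

"$SPDK_DD" --if="$DUMP0" --ob=Nvme0n1 --seek=1 --json <(printf '%s' "$CONF")            # write at block offset 1
"$SPDK_DD" --ib=Nvme0n1 --of="$DUMP1" --skip=1 --count=1 --json <(printf '%s' "$CONF")  # read the same block back

read -rn4096 data_check < "$DUMP1"
[[ $data == "$data_check" ]] && echo "offset write/read verified"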
00:08:23.141 [2024-11-26 20:36:18.090562] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60218 ] 00:08:23.400 [2024-11-26 20:36:18.255552] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:23.400 [2024-11-26 20:36:18.348893] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:23.658 [2024-11-26 20:36:18.436861] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:23.658  [2024-11-26T20:36:18.910Z] Copying: 4096/4096 [B] (average 4000 kBps) 00:08:23.917 00:08:23.917 20:36:18 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@71 -- # read -rn4096 data_check 00:08:23.918 20:36:18 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@72 -- # [[ twl4cvajn3he3rxurbqws9mymf7ihg914vi6kt61xr14p5qqrymmw8m37nj3po1ue28u6dnzxg0vn9nmmypce0nv34n9in9pybc2fkcmbfr5tvlh3hzw5gdjkods89ztlffnroc7em0y71wn82arvjhaqo9pseyol9w3tvt491n9yi2ltf47g599kuvqeaobm4x1zvbsdxk32aacqm1of7xmui68pm9abpfziveyeu76m755cvb4yl7l362t3n8n6rjxya1hf98hflp8tdhaty19emgzt33k4b9mzylnfz8anv8tlsuaz95t1h5d3bzh2xhz9jz0qeg9zc1qxsm7jf302jr9thl0n9rkyxgrzn85mc64hcsoog5o1v75e2ram4lnoriwhffr1uey6xmunn6gvpa0zd4uvrn7m3p1kor8dpv9b883bxelx1ptn4qvt6egjbsguvnvamzomz3zevt7jdx0sw0dq6eo728bvkfoymykkn28tu7o6m2odc7xttnl8h5x4pl1gw8hpltsmukeol6jejapweld4o4dtmjvz5e1bxqrpiyhqvu6ypa6m5qxftsyhgl7jpkcbrv6tapkc8kboo29eqovg8lrjxwyf6nxpqb7r2fy35f2uzuvfzw2zzzewrcym38bw1ljb1reza4s4obilqtriih2w35q5r06o8jkux9wdiqjrp8v0sb9bxcblrkcgb6neavt0ws3ucf4me25188f9m3xx1wklu8dd0gca3gya6xbkpvyf6r4ffngzx3ccj2si6zorznn0mihtkf8o3s93eympzlky7v6bdknilnanmrsa0vq8ltgpqivyrh7kjq00i00qb1weo93z97x9skexwb8xdvwgt2s2gx2m0ct8ocgaz8udoz711ypfaw4lm480wkx3k7gmzb40fc3a36vuz83pxsk0i8diayihz3w9pe0lxswq54326neaj8vdybo60nn22qvy33zymkarycb07b0mo1x1i53w0uvz04fdas8cvb2d99jn7u2q1bdzbqkbbgsui4samx38oy5id5fq4vevh008fv60s26uqyzzfh4e7c3kvegpo2yn1fr2pqa8kdmxmcrxp1fjbldrz53l08l4ajye2ghqwqocdz5wvbrk6q2msqw80dpmuaj4dg5319oqn9ffy3q16uqcgu2ra27kn3m8my2gssmlaokct0h169wylvulap8og07dse4jpmumcbv3pizkhu8kixment52pwhux9wlidv380aeaxbzi54o8hs4s1sv84nu0tfx6pdx9ns8jsu29wu06uy7wewl6p9twhc5l1hlxs6l53b6svj881n5jsfv7u8z5aeqe3r7uib6pefoa5v5namtdeaqrbk2l7tpc5j8tn77zeplsby2whu9vxbvhtk7z88sh3k5kh690nt92h68g7iea0lqxdh24tli5484lfuookzrfmj3k4tsqaz9vb7zv8l0qjjk2x7a7raysncapkjenc4zh4zy69vsw5dfj9oa9l4z2llg4sanmxn82pc8ht19h6to22i5j2l4e5520rdmg8l0lbc2eyopo4p1391pdo8gj7xhff8hrg9sdwuocww1g2snbtj8i73zb8sqot9pirdkx9p0j0j5a8u72jo3atm89b3h9s3klm4i90kt6901dg3292f9ocdjx8ccb9g581wq17ag9gp2xii16nz0x7x9yu5zp03uredldwq39k43myravdv0rra7iesp057hlij8hagznxfceiis6awtwcd6rh7ue4qq1c8465djiwdqwxh7yltqtil9pf8cbn3nxwybzqy8brhybdjri7e09ve5tch8equqvm3jl42jfogiuvrg3kw74svxfyxii5tesffr20u7rvty84tvo07sfft1nyv18cvi2salljp46trf552h3mts9zdgb37erqytdh1nf2a9s1vqbqlsdj5gpks1448g4efswklttg7m58lkpb6itorvkzqhw245zxld7m4qxcdypmw82w1t4up6gahzk2baghntko291gy83gcryz488u00itvi4xiawvt0fk96ids1d9bgtf0crp1aeak4xer74o1fqvdpcl554jqw0j5qqjaig041x430udb53n0n86hnw57s6hsiqxt95nxt3ckaufgmx0x3510lk3ai6zptalvikyhzburxb6etrovbj0idhkxgrzh98wc8n8rpioo6bx7qvzdy34mwd2dt0xsplro7s0nw2e0nif5zmou47545ma90fkvymef3suatovcxxgeo408p7rzky0ngjhsdpi4x7f4y3qfxirbhw1z2dllcvgc6ds3shikel4ww4zeqkq68109ea02x1n179498z4scl5p4mi9sm80ub2afs8m0hb1hceok61wso9rm5hicqzqtemf6g5ztn94y0g1px8zrj64s8t0l9v2y0ai110rljhcymlbz4b7d7s21vtvy25tncdk9rme1d8f88400j03w4twsdmnuyo5zcam01l64h539fbvcu5mkctwvz1gvklr51phfjwtyuq0wi94k5okbooo2pm17ijbqyzm6mp5kd2z
x9jpmmhfgtj09ceharjdb7zeneqdet536lxolme3t5z1oo0rpz0sw05eiipw0o2xuhcyegm11ueeoyi3qne8owjsbb1rulsyln160phzix0nqay1lazgz2rtw5f0juztakiigpb271jm7znvk2bnbovearovsjwoe0shsangqoj81pyi7wkoshzcpd9dw4ryiv4vcpg5v4zzuep39p3321lc2fojpounj61k5recml0tbokjtcavdk8r8zhofhlm862j8jv8kc5tp7ych0048osqhhex4kii2x1338ubnewh3nc7rvry18jjw2q8vu9qtscgell4me5fqx8zweiu3sry2isjuh410f5tgzckh59uvek1sppvhgobpzmzqqcc61i7syxk1u4nshk1j6l9lqoipsw5c3783ivt26lr7vdepr8w8w2n95w2v0w0vpdjxea7f6ifx8uvht37xip26byhlibloiu2rl0bncipcklj5s61wzolxd19tx2j27nnhetox0pfnxz70taqjtb006yuz160dee1yr967f9ak59msuvs5dvicr1nnpc7xsjrj1vt5bqvh3dv7tj7vncdw3rg7bx098my2k1pdmwr11u4jydynv2hehkq7ecdde4qxyok524ncm51m6mzft6ayif5mx7bejhzc8x8gai31a2uike8wr9lqh4d0l0y41q2r9fmpgchctmvex3mirycd8wynxc8tlwlqh38mltiyam5eh61nwcnq0yz9zf5iyk0ek1wz1fuxy0xfjgb40mb12gnxeszpczoh1k12q5v07xu9hkkt3h4ryirnftexpevw6h2jdwqywgngsg2yllljbt1mccf7rssfrjbbp28wduab90meck0mcfgilekai75ghvs6f1jtka0eap8tgyikplvtt6dq7sobjrksly1z77673ryn4f5yhvhvk3x6b45ghlocav23dxhpknqbrvhv7q9zofql5awjfiknlywnesnv1mg6jgah3chta7xpf1fz8zh43ffijrvq9plasc58p2dhj00e18k8roj4i8ieiuzsfyknq3qnu9bhoz7mb4lc53atjrixxps9579qi88y001mkupadkcurbfqb3ztiao02dmbv3bnhjj04bfua3bbb9bejurxrpa9g29uwnm843jvexoqo33dxi337u1k8ir4roy4jgjkzi9m1mtr83ydumohfrsft6k7iiwu2obhfdyxaot93ut1hpg0nt9ui1bnnjv0g77azpkwzq3ka1bdpx208fo3sll6v5e1s3n3rxor9rm0ru0e71zt6rur957ptija7ymm2gxqhqmt4l9edi758v15rft5lzyiyq0wbrttbi620nz7r3v36vbq8j26itxsum7tyy6fe135wdz6ugytt4h9b1g4t8t7jv6ioy01sypxabp0shmkxu490awsjyc7a14eb0cnixdp9apy1s3xbcmbz0m7rsw == \t\w\l\4\c\v\a\j\n\3\h\e\3\r\x\u\r\b\q\w\s\9\m\y\m\f\7\i\h\g\9\1\4\v\i\6\k\t\6\1\x\r\1\4\p\5\q\q\r\y\m\m\w\8\m\3\7\n\j\3\p\o\1\u\e\2\8\u\6\d\n\z\x\g\0\v\n\9\n\m\m\y\p\c\e\0\n\v\3\4\n\9\i\n\9\p\y\b\c\2\f\k\c\m\b\f\r\5\t\v\l\h\3\h\z\w\5\g\d\j\k\o\d\s\8\9\z\t\l\f\f\n\r\o\c\7\e\m\0\y\7\1\w\n\8\2\a\r\v\j\h\a\q\o\9\p\s\e\y\o\l\9\w\3\t\v\t\4\9\1\n\9\y\i\2\l\t\f\4\7\g\5\9\9\k\u\v\q\e\a\o\b\m\4\x\1\z\v\b\s\d\x\k\3\2\a\a\c\q\m\1\o\f\7\x\m\u\i\6\8\p\m\9\a\b\p\f\z\i\v\e\y\e\u\7\6\m\7\5\5\c\v\b\4\y\l\7\l\3\6\2\t\3\n\8\n\6\r\j\x\y\a\1\h\f\9\8\h\f\l\p\8\t\d\h\a\t\y\1\9\e\m\g\z\t\3\3\k\4\b\9\m\z\y\l\n\f\z\8\a\n\v\8\t\l\s\u\a\z\9\5\t\1\h\5\d\3\b\z\h\2\x\h\z\9\j\z\0\q\e\g\9\z\c\1\q\x\s\m\7\j\f\3\0\2\j\r\9\t\h\l\0\n\9\r\k\y\x\g\r\z\n\8\5\m\c\6\4\h\c\s\o\o\g\5\o\1\v\7\5\e\2\r\a\m\4\l\n\o\r\i\w\h\f\f\r\1\u\e\y\6\x\m\u\n\n\6\g\v\p\a\0\z\d\4\u\v\r\n\7\m\3\p\1\k\o\r\8\d\p\v\9\b\8\8\3\b\x\e\l\x\1\p\t\n\4\q\v\t\6\e\g\j\b\s\g\u\v\n\v\a\m\z\o\m\z\3\z\e\v\t\7\j\d\x\0\s\w\0\d\q\6\e\o\7\2\8\b\v\k\f\o\y\m\y\k\k\n\2\8\t\u\7\o\6\m\2\o\d\c\7\x\t\t\n\l\8\h\5\x\4\p\l\1\g\w\8\h\p\l\t\s\m\u\k\e\o\l\6\j\e\j\a\p\w\e\l\d\4\o\4\d\t\m\j\v\z\5\e\1\b\x\q\r\p\i\y\h\q\v\u\6\y\p\a\6\m\5\q\x\f\t\s\y\h\g\l\7\j\p\k\c\b\r\v\6\t\a\p\k\c\8\k\b\o\o\2\9\e\q\o\v\g\8\l\r\j\x\w\y\f\6\n\x\p\q\b\7\r\2\f\y\3\5\f\2\u\z\u\v\f\z\w\2\z\z\z\e\w\r\c\y\m\3\8\b\w\1\l\j\b\1\r\e\z\a\4\s\4\o\b\i\l\q\t\r\i\i\h\2\w\3\5\q\5\r\0\6\o\8\j\k\u\x\9\w\d\i\q\j\r\p\8\v\0\s\b\9\b\x\c\b\l\r\k\c\g\b\6\n\e\a\v\t\0\w\s\3\u\c\f\4\m\e\2\5\1\8\8\f\9\m\3\x\x\1\w\k\l\u\8\d\d\0\g\c\a\3\g\y\a\6\x\b\k\p\v\y\f\6\r\4\f\f\n\g\z\x\3\c\c\j\2\s\i\6\z\o\r\z\n\n\0\m\i\h\t\k\f\8\o\3\s\9\3\e\y\m\p\z\l\k\y\7\v\6\b\d\k\n\i\l\n\a\n\m\r\s\a\0\v\q\8\l\t\g\p\q\i\v\y\r\h\7\k\j\q\0\0\i\0\0\q\b\1\w\e\o\9\3\z\9\7\x\9\s\k\e\x\w\b\8\x\d\v\w\g\t\2\s\2\g\x\2\m\0\c\t\8\o\c\g\a\z\8\u\d\o\z\7\1\1\y\p\f\a\w\4\l\m\4\8\0\w\k\x\3\k\7\g\m\z\b\4\0\f\c\3\a\3\6\v\u\z\8\3\p\x\s\k\0\i\8\d\i\a\y\i\h\z\3\w\9\p\e\0\l\x\s\w\q\5\4\3\2\6\n\e\a\j\8\v\d\y\b\o\6\0\n\n\2\2\q\v\y\3\3\z\y\m\k\a\r\y\c\b\0\7\b\0\m\o\1\x\1\i\5\3\w\0\u\v\z\0\4\f\d\a\s\8\c\v\b\2\d\9\9\j\n\7\u\
2\q\1\b\d\z\b\q\k\b\b\g\s\u\i\4\s\a\m\x\3\8\o\y\5\i\d\5\f\q\4\v\e\v\h\0\0\8\f\v\6\0\s\2\6\u\q\y\z\z\f\h\4\e\7\c\3\k\v\e\g\p\o\2\y\n\1\f\r\2\p\q\a\8\k\d\m\x\m\c\r\x\p\1\f\j\b\l\d\r\z\5\3\l\0\8\l\4\a\j\y\e\2\g\h\q\w\q\o\c\d\z\5\w\v\b\r\k\6\q\2\m\s\q\w\8\0\d\p\m\u\a\j\4\d\g\5\3\1\9\o\q\n\9\f\f\y\3\q\1\6\u\q\c\g\u\2\r\a\2\7\k\n\3\m\8\m\y\2\g\s\s\m\l\a\o\k\c\t\0\h\1\6\9\w\y\l\v\u\l\a\p\8\o\g\0\7\d\s\e\4\j\p\m\u\m\c\b\v\3\p\i\z\k\h\u\8\k\i\x\m\e\n\t\5\2\p\w\h\u\x\9\w\l\i\d\v\3\8\0\a\e\a\x\b\z\i\5\4\o\8\h\s\4\s\1\s\v\8\4\n\u\0\t\f\x\6\p\d\x\9\n\s\8\j\s\u\2\9\w\u\0\6\u\y\7\w\e\w\l\6\p\9\t\w\h\c\5\l\1\h\l\x\s\6\l\5\3\b\6\s\v\j\8\8\1\n\5\j\s\f\v\7\u\8\z\5\a\e\q\e\3\r\7\u\i\b\6\p\e\f\o\a\5\v\5\n\a\m\t\d\e\a\q\r\b\k\2\l\7\t\p\c\5\j\8\t\n\7\7\z\e\p\l\s\b\y\2\w\h\u\9\v\x\b\v\h\t\k\7\z\8\8\s\h\3\k\5\k\h\6\9\0\n\t\9\2\h\6\8\g\7\i\e\a\0\l\q\x\d\h\2\4\t\l\i\5\4\8\4\l\f\u\o\o\k\z\r\f\m\j\3\k\4\t\s\q\a\z\9\v\b\7\z\v\8\l\0\q\j\j\k\2\x\7\a\7\r\a\y\s\n\c\a\p\k\j\e\n\c\4\z\h\4\z\y\6\9\v\s\w\5\d\f\j\9\o\a\9\l\4\z\2\l\l\g\4\s\a\n\m\x\n\8\2\p\c\8\h\t\1\9\h\6\t\o\2\2\i\5\j\2\l\4\e\5\5\2\0\r\d\m\g\8\l\0\l\b\c\2\e\y\o\p\o\4\p\1\3\9\1\p\d\o\8\g\j\7\x\h\f\f\8\h\r\g\9\s\d\w\u\o\c\w\w\1\g\2\s\n\b\t\j\8\i\7\3\z\b\8\s\q\o\t\9\p\i\r\d\k\x\9\p\0\j\0\j\5\a\8\u\7\2\j\o\3\a\t\m\8\9\b\3\h\9\s\3\k\l\m\4\i\9\0\k\t\6\9\0\1\d\g\3\2\9\2\f\9\o\c\d\j\x\8\c\c\b\9\g\5\8\1\w\q\1\7\a\g\9\g\p\2\x\i\i\1\6\n\z\0\x\7\x\9\y\u\5\z\p\0\3\u\r\e\d\l\d\w\q\3\9\k\4\3\m\y\r\a\v\d\v\0\r\r\a\7\i\e\s\p\0\5\7\h\l\i\j\8\h\a\g\z\n\x\f\c\e\i\i\s\6\a\w\t\w\c\d\6\r\h\7\u\e\4\q\q\1\c\8\4\6\5\d\j\i\w\d\q\w\x\h\7\y\l\t\q\t\i\l\9\p\f\8\c\b\n\3\n\x\w\y\b\z\q\y\8\b\r\h\y\b\d\j\r\i\7\e\0\9\v\e\5\t\c\h\8\e\q\u\q\v\m\3\j\l\4\2\j\f\o\g\i\u\v\r\g\3\k\w\7\4\s\v\x\f\y\x\i\i\5\t\e\s\f\f\r\2\0\u\7\r\v\t\y\8\4\t\v\o\0\7\s\f\f\t\1\n\y\v\1\8\c\v\i\2\s\a\l\l\j\p\4\6\t\r\f\5\5\2\h\3\m\t\s\9\z\d\g\b\3\7\e\r\q\y\t\d\h\1\n\f\2\a\9\s\1\v\q\b\q\l\s\d\j\5\g\p\k\s\1\4\4\8\g\4\e\f\s\w\k\l\t\t\g\7\m\5\8\l\k\p\b\6\i\t\o\r\v\k\z\q\h\w\2\4\5\z\x\l\d\7\m\4\q\x\c\d\y\p\m\w\8\2\w\1\t\4\u\p\6\g\a\h\z\k\2\b\a\g\h\n\t\k\o\2\9\1\g\y\8\3\g\c\r\y\z\4\8\8\u\0\0\i\t\v\i\4\x\i\a\w\v\t\0\f\k\9\6\i\d\s\1\d\9\b\g\t\f\0\c\r\p\1\a\e\a\k\4\x\e\r\7\4\o\1\f\q\v\d\p\c\l\5\5\4\j\q\w\0\j\5\q\q\j\a\i\g\0\4\1\x\4\3\0\u\d\b\5\3\n\0\n\8\6\h\n\w\5\7\s\6\h\s\i\q\x\t\9\5\n\x\t\3\c\k\a\u\f\g\m\x\0\x\3\5\1\0\l\k\3\a\i\6\z\p\t\a\l\v\i\k\y\h\z\b\u\r\x\b\6\e\t\r\o\v\b\j\0\i\d\h\k\x\g\r\z\h\9\8\w\c\8\n\8\r\p\i\o\o\6\b\x\7\q\v\z\d\y\3\4\m\w\d\2\d\t\0\x\s\p\l\r\o\7\s\0\n\w\2\e\0\n\i\f\5\z\m\o\u\4\7\5\4\5\m\a\9\0\f\k\v\y\m\e\f\3\s\u\a\t\o\v\c\x\x\g\e\o\4\0\8\p\7\r\z\k\y\0\n\g\j\h\s\d\p\i\4\x\7\f\4\y\3\q\f\x\i\r\b\h\w\1\z\2\d\l\l\c\v\g\c\6\d\s\3\s\h\i\k\e\l\4\w\w\4\z\e\q\k\q\6\8\1\0\9\e\a\0\2\x\1\n\1\7\9\4\9\8\z\4\s\c\l\5\p\4\m\i\9\s\m\8\0\u\b\2\a\f\s\8\m\0\h\b\1\h\c\e\o\k\6\1\w\s\o\9\r\m\5\h\i\c\q\z\q\t\e\m\f\6\g\5\z\t\n\9\4\y\0\g\1\p\x\8\z\r\j\6\4\s\8\t\0\l\9\v\2\y\0\a\i\1\1\0\r\l\j\h\c\y\m\l\b\z\4\b\7\d\7\s\2\1\v\t\v\y\2\5\t\n\c\d\k\9\r\m\e\1\d\8\f\8\8\4\0\0\j\0\3\w\4\t\w\s\d\m\n\u\y\o\5\z\c\a\m\0\1\l\6\4\h\5\3\9\f\b\v\c\u\5\m\k\c\t\w\v\z\1\g\v\k\l\r\5\1\p\h\f\j\w\t\y\u\q\0\w\i\9\4\k\5\o\k\b\o\o\o\2\p\m\1\7\i\j\b\q\y\z\m\6\m\p\5\k\d\2\z\x\9\j\p\m\m\h\f\g\t\j\0\9\c\e\h\a\r\j\d\b\7\z\e\n\e\q\d\e\t\5\3\6\l\x\o\l\m\e\3\t\5\z\1\o\o\0\r\p\z\0\s\w\0\5\e\i\i\p\w\0\o\2\x\u\h\c\y\e\g\m\1\1\u\e\e\o\y\i\3\q\n\e\8\o\w\j\s\b\b\1\r\u\l\s\y\l\n\1\6\0\p\h\z\i\x\0\n\q\a\y\1\l\a\z\g\z\2\r\t\w\5\f\0\j\u\z\t\a\k\i\i\g\p\b\2\7\1\j\m\7\z\n\v\k\2\b\n\b\o\v\e\a\r\o\v\s\j\w\o\e\0\s\h\s\a\n\g\q\o\j\8\1\p\y\i\7\w\k\o\s\h\z\c\p\d\9\d\w\4\r\y\i\v\4\v\c\p\g\5\v
\4\z\z\u\e\p\3\9\p\3\3\2\1\l\c\2\f\o\j\p\o\u\n\j\6\1\k\5\r\e\c\m\l\0\t\b\o\k\j\t\c\a\v\d\k\8\r\8\z\h\o\f\h\l\m\8\6\2\j\8\j\v\8\k\c\5\t\p\7\y\c\h\0\0\4\8\o\s\q\h\h\e\x\4\k\i\i\2\x\1\3\3\8\u\b\n\e\w\h\3\n\c\7\r\v\r\y\1\8\j\j\w\2\q\8\v\u\9\q\t\s\c\g\e\l\l\4\m\e\5\f\q\x\8\z\w\e\i\u\3\s\r\y\2\i\s\j\u\h\4\1\0\f\5\t\g\z\c\k\h\5\9\u\v\e\k\1\s\p\p\v\h\g\o\b\p\z\m\z\q\q\c\c\6\1\i\7\s\y\x\k\1\u\4\n\s\h\k\1\j\6\l\9\l\q\o\i\p\s\w\5\c\3\7\8\3\i\v\t\2\6\l\r\7\v\d\e\p\r\8\w\8\w\2\n\9\5\w\2\v\0\w\0\v\p\d\j\x\e\a\7\f\6\i\f\x\8\u\v\h\t\3\7\x\i\p\2\6\b\y\h\l\i\b\l\o\i\u\2\r\l\0\b\n\c\i\p\c\k\l\j\5\s\6\1\w\z\o\l\x\d\1\9\t\x\2\j\2\7\n\n\h\e\t\o\x\0\p\f\n\x\z\7\0\t\a\q\j\t\b\0\0\6\y\u\z\1\6\0\d\e\e\1\y\r\9\6\7\f\9\a\k\5\9\m\s\u\v\s\5\d\v\i\c\r\1\n\n\p\c\7\x\s\j\r\j\1\v\t\5\b\q\v\h\3\d\v\7\t\j\7\v\n\c\d\w\3\r\g\7\b\x\0\9\8\m\y\2\k\1\p\d\m\w\r\1\1\u\4\j\y\d\y\n\v\2\h\e\h\k\q\7\e\c\d\d\e\4\q\x\y\o\k\5\2\4\n\c\m\5\1\m\6\m\z\f\t\6\a\y\i\f\5\m\x\7\b\e\j\h\z\c\8\x\8\g\a\i\3\1\a\2\u\i\k\e\8\w\r\9\l\q\h\4\d\0\l\0\y\4\1\q\2\r\9\f\m\p\g\c\h\c\t\m\v\e\x\3\m\i\r\y\c\d\8\w\y\n\x\c\8\t\l\w\l\q\h\3\8\m\l\t\i\y\a\m\5\e\h\6\1\n\w\c\n\q\0\y\z\9\z\f\5\i\y\k\0\e\k\1\w\z\1\f\u\x\y\0\x\f\j\g\b\4\0\m\b\1\2\g\n\x\e\s\z\p\c\z\o\h\1\k\1\2\q\5\v\0\7\x\u\9\h\k\k\t\3\h\4\r\y\i\r\n\f\t\e\x\p\e\v\w\6\h\2\j\d\w\q\y\w\g\n\g\s\g\2\y\l\l\l\j\b\t\1\m\c\c\f\7\r\s\s\f\r\j\b\b\p\2\8\w\d\u\a\b\9\0\m\e\c\k\0\m\c\f\g\i\l\e\k\a\i\7\5\g\h\v\s\6\f\1\j\t\k\a\0\e\a\p\8\t\g\y\i\k\p\l\v\t\t\6\d\q\7\s\o\b\j\r\k\s\l\y\1\z\7\7\6\7\3\r\y\n\4\f\5\y\h\v\h\v\k\3\x\6\b\4\5\g\h\l\o\c\a\v\2\3\d\x\h\p\k\n\q\b\r\v\h\v\7\q\9\z\o\f\q\l\5\a\w\j\f\i\k\n\l\y\w\n\e\s\n\v\1\m\g\6\j\g\a\h\3\c\h\t\a\7\x\p\f\1\f\z\8\z\h\4\3\f\f\i\j\r\v\q\9\p\l\a\s\c\5\8\p\2\d\h\j\0\0\e\1\8\k\8\r\o\j\4\i\8\i\e\i\u\z\s\f\y\k\n\q\3\q\n\u\9\b\h\o\z\7\m\b\4\l\c\5\3\a\t\j\r\i\x\x\p\s\9\5\7\9\q\i\8\8\y\0\0\1\m\k\u\p\a\d\k\c\u\r\b\f\q\b\3\z\t\i\a\o\0\2\d\m\b\v\3\b\n\h\j\j\0\4\b\f\u\a\3\b\b\b\9\b\e\j\u\r\x\r\p\a\9\g\2\9\u\w\n\m\8\4\3\j\v\e\x\o\q\o\3\3\d\x\i\3\3\7\u\1\k\8\i\r\4\r\o\y\4\j\g\j\k\z\i\9\m\1\m\t\r\8\3\y\d\u\m\o\h\f\r\s\f\t\6\k\7\i\i\w\u\2\o\b\h\f\d\y\x\a\o\t\9\3\u\t\1\h\p\g\0\n\t\9\u\i\1\b\n\n\j\v\0\g\7\7\a\z\p\k\w\z\q\3\k\a\1\b\d\p\x\2\0\8\f\o\3\s\l\l\6\v\5\e\1\s\3\n\3\r\x\o\r\9\r\m\0\r\u\0\e\7\1\z\t\6\r\u\r\9\5\7\p\t\i\j\a\7\y\m\m\2\g\x\q\h\q\m\t\4\l\9\e\d\i\7\5\8\v\1\5\r\f\t\5\l\z\y\i\y\q\0\w\b\r\t\t\b\i\6\2\0\n\z\7\r\3\v\3\6\v\b\q\8\j\2\6\i\t\x\s\u\m\7\t\y\y\6\f\e\1\3\5\w\d\z\6\u\g\y\t\t\4\h\9\b\1\g\4\t\8\t\7\j\v\6\i\o\y\0\1\s\y\p\x\a\b\p\0\s\h\m\k\x\u\4\9\0\a\w\s\j\y\c\7\a\1\4\e\b\0\c\n\i\x\d\p\9\a\p\y\1\s\3\x\b\c\m\b\z\0\m\7\r\s\w ]] 00:08:23.918 00:08:23.918 real 0m1.696s 00:08:23.918 user 0m1.115s 00:08:23.918 sys 0m0.899s 00:08:23.918 20:36:18 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:23.918 ************************************ 00:08:23.918 END TEST dd_rw_offset 00:08:23.918 ************************************ 00:08:23.918 20:36:18 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:08:24.176 20:36:18 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@1 -- # cleanup 00:08:24.176 20:36:18 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1 00:08:24.176 20:36:18 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:24.176 20:36:18 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@11 -- # local nvme_ref= 00:08:24.176 20:36:18 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@12 -- # local size=0xffff 00:08:24.176 20:36:18 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@14 -- # local bs=1048576 
00:08:24.176 20:36:18 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@15 -- # local count=1 00:08:24.176 20:36:18 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # gen_conf 00:08:24.176 20:36:18 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:08:24.176 20:36:18 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:24.176 20:36:18 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:08:24.176 { 00:08:24.176 "subsystems": [ 00:08:24.176 { 00:08:24.176 "subsystem": "bdev", 00:08:24.176 "config": [ 00:08:24.176 { 00:08:24.176 "params": { 00:08:24.176 "trtype": "pcie", 00:08:24.176 "traddr": "0000:00:10.0", 00:08:24.176 "name": "Nvme0" 00:08:24.176 }, 00:08:24.176 "method": "bdev_nvme_attach_controller" 00:08:24.176 }, 00:08:24.176 { 00:08:24.176 "method": "bdev_wait_for_examine" 00:08:24.176 } 00:08:24.176 ] 00:08:24.176 } 00:08:24.176 ] 00:08:24.176 } 00:08:24.176 [2024-11-26 20:36:18.957473] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:08:24.176 [2024-11-26 20:36:18.957693] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60250 ] 00:08:24.176 [2024-11-26 20:36:19.101060] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:24.434 [2024-11-26 20:36:19.183090] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:24.434 [2024-11-26 20:36:19.269291] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:24.434  [2024-11-26T20:36:19.685Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:08:24.692 00:08:24.951 20:36:19 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:24.951 ************************************ 00:08:24.951 END TEST spdk_dd_basic_rw 00:08:24.951 ************************************ 00:08:24.951 00:08:24.951 real 0m21.353s 00:08:24.951 user 0m14.372s 00:08:24.951 sys 0m9.849s 00:08:24.951 20:36:19 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:24.951 20:36:19 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:08:24.951 20:36:19 spdk_dd -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:08:24.951 20:36:19 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:24.951 20:36:19 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:24.951 20:36:19 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:08:24.951 ************************************ 00:08:24.951 START TEST spdk_dd_posix 00:08:24.951 ************************************ 00:08:24.951 20:36:19 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:08:24.951 * Looking for test storage... 
00:08:24.951 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:24.951 20:36:19 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:24.951 20:36:19 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1693 -- # lcov --version 00:08:24.951 20:36:19 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:24.951 20:36:19 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:24.951 20:36:19 spdk_dd.spdk_dd_posix -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:24.951 20:36:19 spdk_dd.spdk_dd_posix -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:24.951 20:36:19 spdk_dd.spdk_dd_posix -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:24.951 20:36:19 spdk_dd.spdk_dd_posix -- scripts/common.sh@336 -- # IFS=.-: 00:08:24.951 20:36:19 spdk_dd.spdk_dd_posix -- scripts/common.sh@336 -- # read -ra ver1 00:08:24.951 20:36:19 spdk_dd.spdk_dd_posix -- scripts/common.sh@337 -- # IFS=.-: 00:08:24.951 20:36:19 spdk_dd.spdk_dd_posix -- scripts/common.sh@337 -- # read -ra ver2 00:08:24.951 20:36:19 spdk_dd.spdk_dd_posix -- scripts/common.sh@338 -- # local 'op=<' 00:08:24.951 20:36:19 spdk_dd.spdk_dd_posix -- scripts/common.sh@340 -- # ver1_l=2 00:08:24.951 20:36:19 spdk_dd.spdk_dd_posix -- scripts/common.sh@341 -- # ver2_l=1 00:08:24.951 20:36:19 spdk_dd.spdk_dd_posix -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:24.951 20:36:19 spdk_dd.spdk_dd_posix -- scripts/common.sh@344 -- # case "$op" in 00:08:24.951 20:36:19 spdk_dd.spdk_dd_posix -- scripts/common.sh@345 -- # : 1 00:08:24.951 20:36:19 spdk_dd.spdk_dd_posix -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:24.951 20:36:19 spdk_dd.spdk_dd_posix -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:24.951 20:36:19 spdk_dd.spdk_dd_posix -- scripts/common.sh@365 -- # decimal 1 00:08:24.951 20:36:19 spdk_dd.spdk_dd_posix -- scripts/common.sh@353 -- # local d=1 00:08:24.951 20:36:19 spdk_dd.spdk_dd_posix -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:24.951 20:36:19 spdk_dd.spdk_dd_posix -- scripts/common.sh@355 -- # echo 1 00:08:24.951 20:36:19 spdk_dd.spdk_dd_posix -- scripts/common.sh@365 -- # ver1[v]=1 00:08:25.210 20:36:19 spdk_dd.spdk_dd_posix -- scripts/common.sh@366 -- # decimal 2 00:08:25.210 20:36:19 spdk_dd.spdk_dd_posix -- scripts/common.sh@353 -- # local d=2 00:08:25.210 20:36:19 spdk_dd.spdk_dd_posix -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:25.210 20:36:19 spdk_dd.spdk_dd_posix -- scripts/common.sh@355 -- # echo 2 00:08:25.210 20:36:19 spdk_dd.spdk_dd_posix -- scripts/common.sh@366 -- # ver2[v]=2 00:08:25.210 20:36:19 spdk_dd.spdk_dd_posix -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:25.210 20:36:19 spdk_dd.spdk_dd_posix -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:25.210 20:36:19 spdk_dd.spdk_dd_posix -- scripts/common.sh@368 -- # return 0 00:08:25.210 20:36:19 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:25.210 20:36:19 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:25.210 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:25.210 --rc genhtml_branch_coverage=1 00:08:25.210 --rc genhtml_function_coverage=1 00:08:25.210 --rc genhtml_legend=1 00:08:25.210 --rc geninfo_all_blocks=1 00:08:25.210 --rc geninfo_unexecuted_blocks=1 00:08:25.210 00:08:25.210 ' 00:08:25.210 20:36:19 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:25.210 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:25.210 --rc genhtml_branch_coverage=1 00:08:25.211 --rc genhtml_function_coverage=1 00:08:25.211 --rc genhtml_legend=1 00:08:25.211 --rc geninfo_all_blocks=1 00:08:25.211 --rc geninfo_unexecuted_blocks=1 00:08:25.211 00:08:25.211 ' 00:08:25.211 20:36:19 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:25.211 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:25.211 --rc genhtml_branch_coverage=1 00:08:25.211 --rc genhtml_function_coverage=1 00:08:25.211 --rc genhtml_legend=1 00:08:25.211 --rc geninfo_all_blocks=1 00:08:25.211 --rc geninfo_unexecuted_blocks=1 00:08:25.211 00:08:25.211 ' 00:08:25.211 20:36:19 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:25.211 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:25.211 --rc genhtml_branch_coverage=1 00:08:25.211 --rc genhtml_function_coverage=1 00:08:25.211 --rc genhtml_legend=1 00:08:25.211 --rc geninfo_all_blocks=1 00:08:25.211 --rc geninfo_unexecuted_blocks=1 00:08:25.211 00:08:25.211 ' 00:08:25.211 20:36:19 spdk_dd.spdk_dd_posix -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:25.211 20:36:19 spdk_dd.spdk_dd_posix -- scripts/common.sh@15 -- # shopt -s extglob 00:08:25.211 20:36:19 spdk_dd.spdk_dd_posix -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:25.211 20:36:19 spdk_dd.spdk_dd_posix -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:25.211 20:36:19 spdk_dd.spdk_dd_posix -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:25.211 20:36:19 spdk_dd.spdk_dd_posix -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:25.211 20:36:19 spdk_dd.spdk_dd_posix -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:25.211 20:36:19 spdk_dd.spdk_dd_posix -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:25.211 20:36:19 spdk_dd.spdk_dd_posix -- paths/export.sh@5 -- # export PATH 00:08:25.211 20:36:19 spdk_dd.spdk_dd_posix -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:25.211 20:36:19 spdk_dd.spdk_dd_posix -- dd/posix.sh@121 -- # msg[0]=', using AIO' 00:08:25.211 20:36:19 spdk_dd.spdk_dd_posix -- dd/posix.sh@122 -- # msg[1]=', liburing in use' 00:08:25.211 20:36:19 spdk_dd.spdk_dd_posix -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO' 00:08:25.211 20:36:19 spdk_dd.spdk_dd_posix -- dd/posix.sh@125 -- # trap cleanup EXIT 00:08:25.211 20:36:19 spdk_dd.spdk_dd_posix -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:25.211 20:36:19 spdk_dd.spdk_dd_posix -- dd/posix.sh@128 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:25.211 20:36:19 spdk_dd.spdk_dd_posix -- dd/posix.sh@130 -- # tests 00:08:25.211 20:36:19 spdk_dd.spdk_dd_posix -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', liburing in use' 00:08:25.211 * First test run, liburing in use 00:08:25.211 20:36:19 spdk_dd.spdk_dd_posix -- dd/posix.sh@102 -- # run_test dd_flag_append append 00:08:25.211 20:36:19 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:25.211 20:36:19 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:08:25.211 20:36:19 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:08:25.211 ************************************ 00:08:25.211 START TEST dd_flag_append 00:08:25.211 ************************************ 00:08:25.211 20:36:19 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1129 -- # append 00:08:25.211 20:36:19 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@16 -- # local dump0 00:08:25.211 20:36:19 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@17 -- # local dump1 00:08:25.211 20:36:19 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # gen_bytes 32 00:08:25.211 20:36:19 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:08:25.211 20:36:19 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:08:25.211 20:36:19 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # dump0=nkhdyw5r815fvufq2v9j044z8s6dct3m 00:08:25.211 20:36:19 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # gen_bytes 32 00:08:25.211 20:36:19 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:08:25.211 20:36:19 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:08:25.211 20:36:19 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # dump1=0yzflgx97rgb1uce9g2imfu9fcan161t 00:08:25.211 20:36:19 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@22 -- # printf %s nkhdyw5r815fvufq2v9j044z8s6dct3m 00:08:25.211 20:36:19 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@23 -- # printf %s 0yzflgx97rgb1uce9g2imfu9fcan161t 00:08:25.211 20:36:19 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:08:25.211 [2024-11-26 20:36:20.033129] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:08:25.211 [2024-11-26 20:36:20.033510] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60322 ] 00:08:25.211 [2024-11-26 20:36:20.193303] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:25.469 [2024-11-26 20:36:20.283981] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:25.469 [2024-11-26 20:36:20.371284] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:25.469  [2024-11-26T20:36:20.719Z] Copying: 32/32 [B] (average 31 kBps) 00:08:25.726 00:08:25.726 ************************************ 00:08:25.726 END TEST dd_flag_append 00:08:25.726 ************************************ 00:08:25.726 20:36:20 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@27 -- # [[ 0yzflgx97rgb1uce9g2imfu9fcan161tnkhdyw5r815fvufq2v9j044z8s6dct3m == \0\y\z\f\l\g\x\9\7\r\g\b\1\u\c\e\9\g\2\i\m\f\u\9\f\c\a\n\1\6\1\t\n\k\h\d\y\w\5\r\8\1\5\f\v\u\f\q\2\v\9\j\0\4\4\z\8\s\6\d\c\t\3\m ]] 00:08:25.726 00:08:25.726 real 0m0.744s 00:08:25.726 user 0m0.425s 00:08:25.726 sys 0m0.411s 00:08:25.726 20:36:20 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:25.727 20:36:20 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:08:25.984 20:36:20 spdk_dd.spdk_dd_posix -- dd/posix.sh@103 -- # run_test dd_flag_directory directory 00:08:25.984 20:36:20 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:25.984 20:36:20 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:25.984 20:36:20 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:08:25.984 ************************************ 00:08:25.984 START TEST dd_flag_directory 00:08:25.984 ************************************ 00:08:25.984 20:36:20 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1129 -- # directory 00:08:25.984 20:36:20 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:25.984 20:36:20 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@652 -- # local es=0 00:08:25.984 20:36:20 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:25.984 20:36:20 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:25.984 20:36:20 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:25.984 20:36:20 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:25.984 20:36:20 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:25.984 20:36:20 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:25.984 20:36:20 spdk_dd.spdk_dd_posix.dd_flag_directory -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:25.984 20:36:20 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:25.984 20:36:20 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:25.985 20:36:20 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:25.985 [2024-11-26 20:36:20.856704] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:08:25.985 [2024-11-26 20:36:20.856801] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60350 ] 00:08:26.243 [2024-11-26 20:36:21.006699] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:26.243 [2024-11-26 20:36:21.094889] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:26.243 [2024-11-26 20:36:21.182346] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:26.501 [2024-11-26 20:36:21.242040] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:26.501 [2024-11-26 20:36:21.242409] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:26.501 [2024-11-26 20:36:21.242446] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:26.501 [2024-11-26 20:36:21.429506] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:26.809 20:36:21 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # es=236 00:08:26.809 20:36:21 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:26.809 20:36:21 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@664 -- # es=108 00:08:26.809 20:36:21 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@665 -- # case "$es" in 00:08:26.809 20:36:21 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@672 -- # es=1 00:08:26.809 20:36:21 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:26.809 20:36:21 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:08:26.809 20:36:21 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@652 -- # local es=0 00:08:26.809 20:36:21 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:08:26.809 20:36:21 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:26.809 20:36:21 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:26.809 20:36:21 
spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:26.809 20:36:21 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:26.809 20:36:21 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:26.809 20:36:21 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:26.809 20:36:21 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:26.809 20:36:21 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:26.809 20:36:21 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:08:26.809 [2024-11-26 20:36:21.583602] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:08:26.809 [2024-11-26 20:36:21.583954] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60365 ] 00:08:26.809 [2024-11-26 20:36:21.741502] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:27.067 [2024-11-26 20:36:21.847328] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:27.067 [2024-11-26 20:36:21.932641] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:27.067 [2024-11-26 20:36:21.988883] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:27.067 [2024-11-26 20:36:21.988939] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:27.067 [2024-11-26 20:36:21.988956] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:27.325 [2024-11-26 20:36:22.176564] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:27.325 20:36:22 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # es=236 00:08:27.325 20:36:22 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:27.325 20:36:22 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@664 -- # es=108 00:08:27.325 20:36:22 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@665 -- # case "$es" in 00:08:27.325 20:36:22 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@672 -- # es=1 00:08:27.325 20:36:22 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:27.325 00:08:27.325 real 0m1.490s 00:08:27.325 user 0m0.854s 00:08:27.326 sys 0m0.418s 00:08:27.326 20:36:22 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:27.326 20:36:22 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@10 -- # set +x 00:08:27.326 ************************************ 00:08:27.326 END TEST dd_flag_directory 00:08:27.326 ************************************ 00:08:27.326 20:36:22 
spdk_dd.spdk_dd_posix -- dd/posix.sh@104 -- # run_test dd_flag_nofollow nofollow 00:08:27.326 20:36:22 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:27.326 20:36:22 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:27.326 20:36:22 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:08:27.583 ************************************ 00:08:27.583 START TEST dd_flag_nofollow 00:08:27.583 ************************************ 00:08:27.583 20:36:22 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1129 -- # nofollow 00:08:27.583 20:36:22 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:08:27.583 20:36:22 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:08:27.583 20:36:22 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:08:27.583 20:36:22 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:08:27.583 20:36:22 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:27.583 20:36:22 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@652 -- # local es=0 00:08:27.583 20:36:22 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:27.584 20:36:22 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:27.584 20:36:22 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:27.584 20:36:22 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:27.584 20:36:22 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:27.584 20:36:22 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:27.584 20:36:22 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:27.584 20:36:22 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:27.584 20:36:22 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:27.584 20:36:22 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:27.584 [2024-11-26 20:36:22.384712] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:08:27.584 [2024-11-26 20:36:22.385021] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60399 ] 00:08:27.584 [2024-11-26 20:36:22.526212] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:27.841 [2024-11-26 20:36:22.605078] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:27.841 [2024-11-26 20:36:22.684317] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:27.841 [2024-11-26 20:36:22.737684] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:08:27.841 [2024-11-26 20:36:22.737738] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:08:27.841 [2024-11-26 20:36:22.737757] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:28.099 [2024-11-26 20:36:22.922921] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:28.099 20:36:23 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # es=216 00:08:28.099 20:36:23 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:28.099 20:36:23 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@664 -- # es=88 00:08:28.099 20:36:23 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@665 -- # case "$es" in 00:08:28.099 20:36:23 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@672 -- # es=1 00:08:28.099 20:36:23 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:28.099 20:36:23 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:08:28.099 20:36:23 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@652 -- # local es=0 00:08:28.099 20:36:23 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:08:28.099 20:36:23 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:28.099 20:36:23 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:28.099 20:36:23 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:28.099 20:36:23 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:28.099 20:36:23 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:28.099 20:36:23 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:28.099 20:36:23 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:28.099 20:36:23 
spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:28.099 20:36:23 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:08:28.099 [2024-11-26 20:36:23.070109] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:08:28.099 [2024-11-26 20:36:23.070540] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60409 ] 00:08:28.357 [2024-11-26 20:36:23.221690] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:28.357 [2024-11-26 20:36:23.302637] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:28.616 [2024-11-26 20:36:23.382501] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:28.616 [2024-11-26 20:36:23.441174] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:08:28.616 [2024-11-26 20:36:23.441237] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:08:28.616 [2024-11-26 20:36:23.441257] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:28.874 [2024-11-26 20:36:23.624189] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:28.874 20:36:23 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # es=216 00:08:28.874 20:36:23 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:28.874 20:36:23 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@664 -- # es=88 00:08:28.874 20:36:23 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@665 -- # case "$es" in 00:08:28.875 20:36:23 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@672 -- # es=1 00:08:28.875 20:36:23 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:28.875 20:36:23 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@46 -- # gen_bytes 512 00:08:28.875 20:36:23 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/common.sh@98 -- # xtrace_disable 00:08:28.875 20:36:23 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:08:28.875 20:36:23 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:28.875 [2024-11-26 20:36:23.754471] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:08:28.875 [2024-11-26 20:36:23.754568] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60416 ] 00:08:29.133 [2024-11-26 20:36:23.894668] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:29.133 [2024-11-26 20:36:23.974680] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:29.133 [2024-11-26 20:36:24.055774] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:29.133  [2024-11-26T20:36:24.384Z] Copying: 512/512 [B] (average 500 kBps) 00:08:29.391 00:08:29.650 20:36:24 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@49 -- # [[ qbcs5qmg2ps0n0avm89o3ha40n3dslnchiipnwtng5894dz957qm20xulznblrpjr322pehvjfn96raz9szbgk97mw5lovznrkbkt75t1bc58z7itj1e5l0kd3tbkv30pgbf9lrj9wu0yb7a5do4frhwtof90gbmpbv7d1yjzlbjhgmw9y1n63qnefes3nwdhbyu9ujuq8h4gb12qftp35wu592ogr1gqlkmaos75klk04qm9hb70cd9gntc6fltgj1oihe2ca4o0313x5xn3q5txs3cpgemyi2b5yzlnoofx64j331q57z91cw3k0l1onjqjbimtz6mv1nk69zzuusgho2jet17ykh1x729q7th3ummnud9t4tpjhpapjmfbvhoqrtje4mw8pj5ljwynugqvf5cgwvy9a2y6k36m10et0k0xdiak6so3wpy7gzx5yy7yp91z3s74oqmh2d7ujgni8k6yj4nupvta4hdqstzbdapxjsnkb6e1afbxf68 == \q\b\c\s\5\q\m\g\2\p\s\0\n\0\a\v\m\8\9\o\3\h\a\4\0\n\3\d\s\l\n\c\h\i\i\p\n\w\t\n\g\5\8\9\4\d\z\9\5\7\q\m\2\0\x\u\l\z\n\b\l\r\p\j\r\3\2\2\p\e\h\v\j\f\n\9\6\r\a\z\9\s\z\b\g\k\9\7\m\w\5\l\o\v\z\n\r\k\b\k\t\7\5\t\1\b\c\5\8\z\7\i\t\j\1\e\5\l\0\k\d\3\t\b\k\v\3\0\p\g\b\f\9\l\r\j\9\w\u\0\y\b\7\a\5\d\o\4\f\r\h\w\t\o\f\9\0\g\b\m\p\b\v\7\d\1\y\j\z\l\b\j\h\g\m\w\9\y\1\n\6\3\q\n\e\f\e\s\3\n\w\d\h\b\y\u\9\u\j\u\q\8\h\4\g\b\1\2\q\f\t\p\3\5\w\u\5\9\2\o\g\r\1\g\q\l\k\m\a\o\s\7\5\k\l\k\0\4\q\m\9\h\b\7\0\c\d\9\g\n\t\c\6\f\l\t\g\j\1\o\i\h\e\2\c\a\4\o\0\3\1\3\x\5\x\n\3\q\5\t\x\s\3\c\p\g\e\m\y\i\2\b\5\y\z\l\n\o\o\f\x\6\4\j\3\3\1\q\5\7\z\9\1\c\w\3\k\0\l\1\o\n\j\q\j\b\i\m\t\z\6\m\v\1\n\k\6\9\z\z\u\u\s\g\h\o\2\j\e\t\1\7\y\k\h\1\x\7\2\9\q\7\t\h\3\u\m\m\n\u\d\9\t\4\t\p\j\h\p\a\p\j\m\f\b\v\h\o\q\r\t\j\e\4\m\w\8\p\j\5\l\j\w\y\n\u\g\q\v\f\5\c\g\w\v\y\9\a\2\y\6\k\3\6\m\1\0\e\t\0\k\0\x\d\i\a\k\6\s\o\3\w\p\y\7\g\z\x\5\y\y\7\y\p\9\1\z\3\s\7\4\o\q\m\h\2\d\7\u\j\g\n\i\8\k\6\y\j\4\n\u\p\v\t\a\4\h\d\q\s\t\z\b\d\a\p\x\j\s\n\k\b\6\e\1\a\f\b\x\f\6\8 ]] 00:08:29.650 00:08:29.650 real 0m2.064s 00:08:29.650 user 0m1.152s 00:08:29.650 sys 0m0.786s 00:08:29.650 ************************************ 00:08:29.650 END TEST dd_flag_nofollow 00:08:29.650 ************************************ 00:08:29.650 20:36:24 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:29.650 20:36:24 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:08:29.650 20:36:24 spdk_dd.spdk_dd_posix -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime 00:08:29.650 20:36:24 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:29.650 20:36:24 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:29.650 20:36:24 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:08:29.650 ************************************ 00:08:29.650 START TEST dd_flag_noatime 00:08:29.650 ************************************ 00:08:29.650 20:36:24 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1129 -- # noatime 00:08:29.650 20:36:24 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@53 -- # local 
atime_if 00:08:29.650 20:36:24 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@54 -- # local atime_of 00:08:29.650 20:36:24 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@58 -- # gen_bytes 512 00:08:29.650 20:36:24 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/common.sh@98 -- # xtrace_disable 00:08:29.650 20:36:24 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:08:29.650 20:36:24 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:29.650 20:36:24 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # atime_if=1732653384 00:08:29.650 20:36:24 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:29.650 20:36:24 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # atime_of=1732653384 00:08:29.650 20:36:24 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@66 -- # sleep 1 00:08:30.597 20:36:25 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:30.597 [2024-11-26 20:36:25.534737] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:08:30.597 [2024-11-26 20:36:25.534851] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60464 ] 00:08:30.855 [2024-11-26 20:36:25.692908] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:30.855 [2024-11-26 20:36:25.788106] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:31.113 [2024-11-26 20:36:25.874027] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:31.113  [2024-11-26T20:36:26.365Z] Copying: 512/512 [B] (average 500 kBps) 00:08:31.372 00:08:31.372 20:36:26 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:31.373 20:36:26 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # (( atime_if == 1732653384 )) 00:08:31.373 20:36:26 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:31.373 20:36:26 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # (( atime_of == 1732653384 )) 00:08:31.373 20:36:26 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:31.373 [2024-11-26 20:36:26.273627] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:08:31.373 [2024-11-26 20:36:26.273732] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60478 ] 00:08:31.631 [2024-11-26 20:36:26.418843] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:31.631 [2024-11-26 20:36:26.502533] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:31.631 [2024-11-26 20:36:26.587282] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:31.890  [2024-11-26T20:36:27.141Z] Copying: 512/512 [B] (average 500 kBps) 00:08:32.148 00:08:32.148 20:36:26 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:32.148 20:36:26 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # (( atime_if < 1732653386 )) 00:08:32.148 00:08:32.148 real 0m2.473s 00:08:32.148 user 0m0.820s 00:08:32.148 sys 0m0.844s 00:08:32.148 ************************************ 00:08:32.148 END TEST dd_flag_noatime 00:08:32.148 ************************************ 00:08:32.148 20:36:26 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:32.148 20:36:26 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:08:32.148 20:36:26 spdk_dd.spdk_dd_posix -- dd/posix.sh@106 -- # run_test dd_flags_misc io 00:08:32.148 20:36:26 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:32.148 20:36:26 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:32.148 20:36:26 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:08:32.148 ************************************ 00:08:32.148 START TEST dd_flags_misc 00:08:32.148 ************************************ 00:08:32.148 20:36:26 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1129 -- # io 00:08:32.148 20:36:26 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:08:32.148 20:36:26 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:08:32.148 20:36:26 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:08:32.148 20:36:26 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:08:32.148 20:36:26 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:08:32.148 20:36:26 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:08:32.148 20:36:26 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:08:32.148 20:36:27 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:32.148 20:36:27 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:08:32.148 [2024-11-26 20:36:27.047663] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:08:32.148 [2024-11-26 20:36:27.047969] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60506 ] 00:08:32.407 [2024-11-26 20:36:27.193291] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:32.407 [2024-11-26 20:36:27.273281] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:32.407 [2024-11-26 20:36:27.355469] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:32.664  [2024-11-26T20:36:27.922Z] Copying: 512/512 [B] (average 500 kBps) 00:08:32.929 00:08:32.929 20:36:27 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 1nnermxsxqwojnod13pjm6216u7el5gdo3fepasl4lps7egqeeczh7z5dzaqww67v69b7dnbjzwiiz6euxhuqta38xwt0c0yp9946b0a2xvzut15hz8h6o7cu5z9ajjkra3z5kutli37tl63rf2dmqqtp32ynsjytdkrnuezd1xwirh19khfn57g47t64re6wgx35fybwoan14mp07d4aumenp4t7z6gfw2c46v6qicq1ehah6mfv5j8bez3lba3tfs38l3z6zuflghxwvdwuvfsv3jn3r2x9aht1bb77s3d83yvua4hn8kqqm2x4ea8sdh8wzrvshbe3zk6i5vy74xkjrbcc1khapomf75smu5gry2har73u86d6btax9uv28ku2p3no0uzoxpjqvuz7epqla2phh1alc9rgstsy4nx9i6ajz8pp40vec25ro4jpj3fu7o4qux252wtmpnjvkwb8dd92wkbvn2uj0l4di3k56zwjketpbq6glohvxax == \1\n\n\e\r\m\x\s\x\q\w\o\j\n\o\d\1\3\p\j\m\6\2\1\6\u\7\e\l\5\g\d\o\3\f\e\p\a\s\l\4\l\p\s\7\e\g\q\e\e\c\z\h\7\z\5\d\z\a\q\w\w\6\7\v\6\9\b\7\d\n\b\j\z\w\i\i\z\6\e\u\x\h\u\q\t\a\3\8\x\w\t\0\c\0\y\p\9\9\4\6\b\0\a\2\x\v\z\u\t\1\5\h\z\8\h\6\o\7\c\u\5\z\9\a\j\j\k\r\a\3\z\5\k\u\t\l\i\3\7\t\l\6\3\r\f\2\d\m\q\q\t\p\3\2\y\n\s\j\y\t\d\k\r\n\u\e\z\d\1\x\w\i\r\h\1\9\k\h\f\n\5\7\g\4\7\t\6\4\r\e\6\w\g\x\3\5\f\y\b\w\o\a\n\1\4\m\p\0\7\d\4\a\u\m\e\n\p\4\t\7\z\6\g\f\w\2\c\4\6\v\6\q\i\c\q\1\e\h\a\h\6\m\f\v\5\j\8\b\e\z\3\l\b\a\3\t\f\s\3\8\l\3\z\6\z\u\f\l\g\h\x\w\v\d\w\u\v\f\s\v\3\j\n\3\r\2\x\9\a\h\t\1\b\b\7\7\s\3\d\8\3\y\v\u\a\4\h\n\8\k\q\q\m\2\x\4\e\a\8\s\d\h\8\w\z\r\v\s\h\b\e\3\z\k\6\i\5\v\y\7\4\x\k\j\r\b\c\c\1\k\h\a\p\o\m\f\7\5\s\m\u\5\g\r\y\2\h\a\r\7\3\u\8\6\d\6\b\t\a\x\9\u\v\2\8\k\u\2\p\3\n\o\0\u\z\o\x\p\j\q\v\u\z\7\e\p\q\l\a\2\p\h\h\1\a\l\c\9\r\g\s\t\s\y\4\n\x\9\i\6\a\j\z\8\p\p\4\0\v\e\c\2\5\r\o\4\j\p\j\3\f\u\7\o\4\q\u\x\2\5\2\w\t\m\p\n\j\v\k\w\b\8\d\d\9\2\w\k\b\v\n\2\u\j\0\l\4\d\i\3\k\5\6\z\w\j\k\e\t\p\b\q\6\g\l\o\h\v\x\a\x ]] 00:08:32.929 20:36:27 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:32.929 20:36:27 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:08:32.929 [2024-11-26 20:36:27.764012] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:08:32.929 [2024-11-26 20:36:27.764135] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60521 ] 00:08:32.929 [2024-11-26 20:36:27.917755] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:33.188 [2024-11-26 20:36:28.003524] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:33.188 [2024-11-26 20:36:28.089509] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:33.188  [2024-11-26T20:36:28.440Z] Copying: 512/512 [B] (average 500 kBps) 00:08:33.447 00:08:33.705 20:36:28 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 1nnermxsxqwojnod13pjm6216u7el5gdo3fepasl4lps7egqeeczh7z5dzaqww67v69b7dnbjzwiiz6euxhuqta38xwt0c0yp9946b0a2xvzut15hz8h6o7cu5z9ajjkra3z5kutli37tl63rf2dmqqtp32ynsjytdkrnuezd1xwirh19khfn57g47t64re6wgx35fybwoan14mp07d4aumenp4t7z6gfw2c46v6qicq1ehah6mfv5j8bez3lba3tfs38l3z6zuflghxwvdwuvfsv3jn3r2x9aht1bb77s3d83yvua4hn8kqqm2x4ea8sdh8wzrvshbe3zk6i5vy74xkjrbcc1khapomf75smu5gry2har73u86d6btax9uv28ku2p3no0uzoxpjqvuz7epqla2phh1alc9rgstsy4nx9i6ajz8pp40vec25ro4jpj3fu7o4qux252wtmpnjvkwb8dd92wkbvn2uj0l4di3k56zwjketpbq6glohvxax == \1\n\n\e\r\m\x\s\x\q\w\o\j\n\o\d\1\3\p\j\m\6\2\1\6\u\7\e\l\5\g\d\o\3\f\e\p\a\s\l\4\l\p\s\7\e\g\q\e\e\c\z\h\7\z\5\d\z\a\q\w\w\6\7\v\6\9\b\7\d\n\b\j\z\w\i\i\z\6\e\u\x\h\u\q\t\a\3\8\x\w\t\0\c\0\y\p\9\9\4\6\b\0\a\2\x\v\z\u\t\1\5\h\z\8\h\6\o\7\c\u\5\z\9\a\j\j\k\r\a\3\z\5\k\u\t\l\i\3\7\t\l\6\3\r\f\2\d\m\q\q\t\p\3\2\y\n\s\j\y\t\d\k\r\n\u\e\z\d\1\x\w\i\r\h\1\9\k\h\f\n\5\7\g\4\7\t\6\4\r\e\6\w\g\x\3\5\f\y\b\w\o\a\n\1\4\m\p\0\7\d\4\a\u\m\e\n\p\4\t\7\z\6\g\f\w\2\c\4\6\v\6\q\i\c\q\1\e\h\a\h\6\m\f\v\5\j\8\b\e\z\3\l\b\a\3\t\f\s\3\8\l\3\z\6\z\u\f\l\g\h\x\w\v\d\w\u\v\f\s\v\3\j\n\3\r\2\x\9\a\h\t\1\b\b\7\7\s\3\d\8\3\y\v\u\a\4\h\n\8\k\q\q\m\2\x\4\e\a\8\s\d\h\8\w\z\r\v\s\h\b\e\3\z\k\6\i\5\v\y\7\4\x\k\j\r\b\c\c\1\k\h\a\p\o\m\f\7\5\s\m\u\5\g\r\y\2\h\a\r\7\3\u\8\6\d\6\b\t\a\x\9\u\v\2\8\k\u\2\p\3\n\o\0\u\z\o\x\p\j\q\v\u\z\7\e\p\q\l\a\2\p\h\h\1\a\l\c\9\r\g\s\t\s\y\4\n\x\9\i\6\a\j\z\8\p\p\4\0\v\e\c\2\5\r\o\4\j\p\j\3\f\u\7\o\4\q\u\x\2\5\2\w\t\m\p\n\j\v\k\w\b\8\d\d\9\2\w\k\b\v\n\2\u\j\0\l\4\d\i\3\k\5\6\z\w\j\k\e\t\p\b\q\6\g\l\o\h\v\x\a\x ]] 00:08:33.705 20:36:28 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:33.705 20:36:28 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:08:33.705 [2024-11-26 20:36:28.494224] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:08:33.706 [2024-11-26 20:36:28.494392] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60531 ] 00:08:33.706 [2024-11-26 20:36:28.655454] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:33.964 [2024-11-26 20:36:28.761664] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:33.964 [2024-11-26 20:36:28.860682] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:33.964  [2024-11-26T20:36:29.524Z] Copying: 512/512 [B] (average 250 kBps) 00:08:34.531 00:08:34.532 20:36:29 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 1nnermxsxqwojnod13pjm6216u7el5gdo3fepasl4lps7egqeeczh7z5dzaqww67v69b7dnbjzwiiz6euxhuqta38xwt0c0yp9946b0a2xvzut15hz8h6o7cu5z9ajjkra3z5kutli37tl63rf2dmqqtp32ynsjytdkrnuezd1xwirh19khfn57g47t64re6wgx35fybwoan14mp07d4aumenp4t7z6gfw2c46v6qicq1ehah6mfv5j8bez3lba3tfs38l3z6zuflghxwvdwuvfsv3jn3r2x9aht1bb77s3d83yvua4hn8kqqm2x4ea8sdh8wzrvshbe3zk6i5vy74xkjrbcc1khapomf75smu5gry2har73u86d6btax9uv28ku2p3no0uzoxpjqvuz7epqla2phh1alc9rgstsy4nx9i6ajz8pp40vec25ro4jpj3fu7o4qux252wtmpnjvkwb8dd92wkbvn2uj0l4di3k56zwjketpbq6glohvxax == \1\n\n\e\r\m\x\s\x\q\w\o\j\n\o\d\1\3\p\j\m\6\2\1\6\u\7\e\l\5\g\d\o\3\f\e\p\a\s\l\4\l\p\s\7\e\g\q\e\e\c\z\h\7\z\5\d\z\a\q\w\w\6\7\v\6\9\b\7\d\n\b\j\z\w\i\i\z\6\e\u\x\h\u\q\t\a\3\8\x\w\t\0\c\0\y\p\9\9\4\6\b\0\a\2\x\v\z\u\t\1\5\h\z\8\h\6\o\7\c\u\5\z\9\a\j\j\k\r\a\3\z\5\k\u\t\l\i\3\7\t\l\6\3\r\f\2\d\m\q\q\t\p\3\2\y\n\s\j\y\t\d\k\r\n\u\e\z\d\1\x\w\i\r\h\1\9\k\h\f\n\5\7\g\4\7\t\6\4\r\e\6\w\g\x\3\5\f\y\b\w\o\a\n\1\4\m\p\0\7\d\4\a\u\m\e\n\p\4\t\7\z\6\g\f\w\2\c\4\6\v\6\q\i\c\q\1\e\h\a\h\6\m\f\v\5\j\8\b\e\z\3\l\b\a\3\t\f\s\3\8\l\3\z\6\z\u\f\l\g\h\x\w\v\d\w\u\v\f\s\v\3\j\n\3\r\2\x\9\a\h\t\1\b\b\7\7\s\3\d\8\3\y\v\u\a\4\h\n\8\k\q\q\m\2\x\4\e\a\8\s\d\h\8\w\z\r\v\s\h\b\e\3\z\k\6\i\5\v\y\7\4\x\k\j\r\b\c\c\1\k\h\a\p\o\m\f\7\5\s\m\u\5\g\r\y\2\h\a\r\7\3\u\8\6\d\6\b\t\a\x\9\u\v\2\8\k\u\2\p\3\n\o\0\u\z\o\x\p\j\q\v\u\z\7\e\p\q\l\a\2\p\h\h\1\a\l\c\9\r\g\s\t\s\y\4\n\x\9\i\6\a\j\z\8\p\p\4\0\v\e\c\2\5\r\o\4\j\p\j\3\f\u\7\o\4\q\u\x\2\5\2\w\t\m\p\n\j\v\k\w\b\8\d\d\9\2\w\k\b\v\n\2\u\j\0\l\4\d\i\3\k\5\6\z\w\j\k\e\t\p\b\q\6\g\l\o\h\v\x\a\x ]] 00:08:34.532 20:36:29 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:34.532 20:36:29 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:08:34.532 [2024-11-26 20:36:29.290305] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:08:34.532 [2024-11-26 20:36:29.290425] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60546 ] 00:08:34.532 [2024-11-26 20:36:29.439779] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:34.532 [2024-11-26 20:36:29.520233] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:34.791 [2024-11-26 20:36:29.602151] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:34.791  [2024-11-26T20:36:30.043Z] Copying: 512/512 [B] (average 13 kBps) 00:08:35.050 00:08:35.050 20:36:29 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 1nnermxsxqwojnod13pjm6216u7el5gdo3fepasl4lps7egqeeczh7z5dzaqww67v69b7dnbjzwiiz6euxhuqta38xwt0c0yp9946b0a2xvzut15hz8h6o7cu5z9ajjkra3z5kutli37tl63rf2dmqqtp32ynsjytdkrnuezd1xwirh19khfn57g47t64re6wgx35fybwoan14mp07d4aumenp4t7z6gfw2c46v6qicq1ehah6mfv5j8bez3lba3tfs38l3z6zuflghxwvdwuvfsv3jn3r2x9aht1bb77s3d83yvua4hn8kqqm2x4ea8sdh8wzrvshbe3zk6i5vy74xkjrbcc1khapomf75smu5gry2har73u86d6btax9uv28ku2p3no0uzoxpjqvuz7epqla2phh1alc9rgstsy4nx9i6ajz8pp40vec25ro4jpj3fu7o4qux252wtmpnjvkwb8dd92wkbvn2uj0l4di3k56zwjketpbq6glohvxax == \1\n\n\e\r\m\x\s\x\q\w\o\j\n\o\d\1\3\p\j\m\6\2\1\6\u\7\e\l\5\g\d\o\3\f\e\p\a\s\l\4\l\p\s\7\e\g\q\e\e\c\z\h\7\z\5\d\z\a\q\w\w\6\7\v\6\9\b\7\d\n\b\j\z\w\i\i\z\6\e\u\x\h\u\q\t\a\3\8\x\w\t\0\c\0\y\p\9\9\4\6\b\0\a\2\x\v\z\u\t\1\5\h\z\8\h\6\o\7\c\u\5\z\9\a\j\j\k\r\a\3\z\5\k\u\t\l\i\3\7\t\l\6\3\r\f\2\d\m\q\q\t\p\3\2\y\n\s\j\y\t\d\k\r\n\u\e\z\d\1\x\w\i\r\h\1\9\k\h\f\n\5\7\g\4\7\t\6\4\r\e\6\w\g\x\3\5\f\y\b\w\o\a\n\1\4\m\p\0\7\d\4\a\u\m\e\n\p\4\t\7\z\6\g\f\w\2\c\4\6\v\6\q\i\c\q\1\e\h\a\h\6\m\f\v\5\j\8\b\e\z\3\l\b\a\3\t\f\s\3\8\l\3\z\6\z\u\f\l\g\h\x\w\v\d\w\u\v\f\s\v\3\j\n\3\r\2\x\9\a\h\t\1\b\b\7\7\s\3\d\8\3\y\v\u\a\4\h\n\8\k\q\q\m\2\x\4\e\a\8\s\d\h\8\w\z\r\v\s\h\b\e\3\z\k\6\i\5\v\y\7\4\x\k\j\r\b\c\c\1\k\h\a\p\o\m\f\7\5\s\m\u\5\g\r\y\2\h\a\r\7\3\u\8\6\d\6\b\t\a\x\9\u\v\2\8\k\u\2\p\3\n\o\0\u\z\o\x\p\j\q\v\u\z\7\e\p\q\l\a\2\p\h\h\1\a\l\c\9\r\g\s\t\s\y\4\n\x\9\i\6\a\j\z\8\p\p\4\0\v\e\c\2\5\r\o\4\j\p\j\3\f\u\7\o\4\q\u\x\2\5\2\w\t\m\p\n\j\v\k\w\b\8\d\d\9\2\w\k\b\v\n\2\u\j\0\l\4\d\i\3\k\5\6\z\w\j\k\e\t\p\b\q\6\g\l\o\h\v\x\a\x ]] 00:08:35.050 20:36:29 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:08:35.050 20:36:29 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:08:35.050 20:36:29 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:08:35.050 20:36:29 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:08:35.050 20:36:29 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:35.050 20:36:29 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:08:35.310 [2024-11-26 20:36:30.046605] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
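For reference, the dd_flags_misc passes logged around this point pair each input flag (direct, nonblock) with each output flag (direct, nonblock, sync, dsync), copy a 512-byte random dump through spdk_dd, and confirm the destination bytes match the source. A condensed standalone sketch of that matrix follows, using the binary and paths visible in the log; the cmp(1) step is an assumption standing in for the test's own bash string comparison:

DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
SRC=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
DST=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
for iflag in direct nonblock; do
  for oflag in direct nonblock sync dsync; do
    # copy the 512-byte dump with this read/write flag pair
    "$DD" --if="$SRC" --iflag="$iflag" --of="$DST" --oflag="$oflag"
    # the real test compares file contents via a [[ ... == ... ]] pattern match;
    # cmp is used here only as a stand-in for that check
    cmp "$SRC" "$DST" || echo "mismatch for $iflag -> $oflag"
  done
done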
00:08:35.310 [2024-11-26 20:36:30.046991] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60555 ] 00:08:35.310 [2024-11-26 20:36:30.205343] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:35.310 [2024-11-26 20:36:30.295873] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:35.570 [2024-11-26 20:36:30.384151] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:35.570  [2024-11-26T20:36:30.834Z] Copying: 512/512 [B] (average 500 kBps) 00:08:35.841 00:08:35.841 20:36:30 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ h8r4vpzzrwp1olpj7azm4b8ijvr5ltair62h5p5jc3qmsnhjdhsztsimhhbw81pllg9n1r1mopf4hqgbelqluvu08an4okx8tuskjzqgyfhcxl8cabwhmerk9634bv891212mk8a88k2b6jaxi186gcg6txjwsipt1dhuoxoqe65vy1ii2ndcrz6vc5p4l7m7z94grzbjwgug788okbfla5bo13y5is419bwlolrh4qye3fkrvdd6a9m7bfb73oo5gm8w89kngkbga15fbdok47xbvlrpxi4ivilw73nyygpwzjxbo7u8boo7k92jp340megxzuj9wusg0nwsw1cnyq7t056jpm5f69mv2o2l8ziys9hyj9rzqsdoekdxuzvcf6ggo806idctzp8j22eg9e7efh7gs75ars2pd4p0dugu53xc1ty1sefa3hcibmpdsn16gq8uexugiz2ndvrv5yx23vdw3j1ak96uvck9x70zh6u79b0qjr8zqjcf159 == \h\8\r\4\v\p\z\z\r\w\p\1\o\l\p\j\7\a\z\m\4\b\8\i\j\v\r\5\l\t\a\i\r\6\2\h\5\p\5\j\c\3\q\m\s\n\h\j\d\h\s\z\t\s\i\m\h\h\b\w\8\1\p\l\l\g\9\n\1\r\1\m\o\p\f\4\h\q\g\b\e\l\q\l\u\v\u\0\8\a\n\4\o\k\x\8\t\u\s\k\j\z\q\g\y\f\h\c\x\l\8\c\a\b\w\h\m\e\r\k\9\6\3\4\b\v\8\9\1\2\1\2\m\k\8\a\8\8\k\2\b\6\j\a\x\i\1\8\6\g\c\g\6\t\x\j\w\s\i\p\t\1\d\h\u\o\x\o\q\e\6\5\v\y\1\i\i\2\n\d\c\r\z\6\v\c\5\p\4\l\7\m\7\z\9\4\g\r\z\b\j\w\g\u\g\7\8\8\o\k\b\f\l\a\5\b\o\1\3\y\5\i\s\4\1\9\b\w\l\o\l\r\h\4\q\y\e\3\f\k\r\v\d\d\6\a\9\m\7\b\f\b\7\3\o\o\5\g\m\8\w\8\9\k\n\g\k\b\g\a\1\5\f\b\d\o\k\4\7\x\b\v\l\r\p\x\i\4\i\v\i\l\w\7\3\n\y\y\g\p\w\z\j\x\b\o\7\u\8\b\o\o\7\k\9\2\j\p\3\4\0\m\e\g\x\z\u\j\9\w\u\s\g\0\n\w\s\w\1\c\n\y\q\7\t\0\5\6\j\p\m\5\f\6\9\m\v\2\o\2\l\8\z\i\y\s\9\h\y\j\9\r\z\q\s\d\o\e\k\d\x\u\z\v\c\f\6\g\g\o\8\0\6\i\d\c\t\z\p\8\j\2\2\e\g\9\e\7\e\f\h\7\g\s\7\5\a\r\s\2\p\d\4\p\0\d\u\g\u\5\3\x\c\1\t\y\1\s\e\f\a\3\h\c\i\b\m\p\d\s\n\1\6\g\q\8\u\e\x\u\g\i\z\2\n\d\v\r\v\5\y\x\2\3\v\d\w\3\j\1\a\k\9\6\u\v\c\k\9\x\7\0\z\h\6\u\7\9\b\0\q\j\r\8\z\q\j\c\f\1\5\9 ]] 00:08:35.841 20:36:30 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:35.841 20:36:30 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:08:35.841 [2024-11-26 20:36:30.760857] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:08:35.841 [2024-11-26 20:36:30.760944] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60570 ] 00:08:36.101 [2024-11-26 20:36:30.901295] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:36.101 [2024-11-26 20:36:30.980110] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:36.101 [2024-11-26 20:36:31.061243] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:36.360  [2024-11-26T20:36:31.611Z] Copying: 512/512 [B] (average 500 kBps) 00:08:36.618 00:08:36.618 20:36:31 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ h8r4vpzzrwp1olpj7azm4b8ijvr5ltair62h5p5jc3qmsnhjdhsztsimhhbw81pllg9n1r1mopf4hqgbelqluvu08an4okx8tuskjzqgyfhcxl8cabwhmerk9634bv891212mk8a88k2b6jaxi186gcg6txjwsipt1dhuoxoqe65vy1ii2ndcrz6vc5p4l7m7z94grzbjwgug788okbfla5bo13y5is419bwlolrh4qye3fkrvdd6a9m7bfb73oo5gm8w89kngkbga15fbdok47xbvlrpxi4ivilw73nyygpwzjxbo7u8boo7k92jp340megxzuj9wusg0nwsw1cnyq7t056jpm5f69mv2o2l8ziys9hyj9rzqsdoekdxuzvcf6ggo806idctzp8j22eg9e7efh7gs75ars2pd4p0dugu53xc1ty1sefa3hcibmpdsn16gq8uexugiz2ndvrv5yx23vdw3j1ak96uvck9x70zh6u79b0qjr8zqjcf159 == \h\8\r\4\v\p\z\z\r\w\p\1\o\l\p\j\7\a\z\m\4\b\8\i\j\v\r\5\l\t\a\i\r\6\2\h\5\p\5\j\c\3\q\m\s\n\h\j\d\h\s\z\t\s\i\m\h\h\b\w\8\1\p\l\l\g\9\n\1\r\1\m\o\p\f\4\h\q\g\b\e\l\q\l\u\v\u\0\8\a\n\4\o\k\x\8\t\u\s\k\j\z\q\g\y\f\h\c\x\l\8\c\a\b\w\h\m\e\r\k\9\6\3\4\b\v\8\9\1\2\1\2\m\k\8\a\8\8\k\2\b\6\j\a\x\i\1\8\6\g\c\g\6\t\x\j\w\s\i\p\t\1\d\h\u\o\x\o\q\e\6\5\v\y\1\i\i\2\n\d\c\r\z\6\v\c\5\p\4\l\7\m\7\z\9\4\g\r\z\b\j\w\g\u\g\7\8\8\o\k\b\f\l\a\5\b\o\1\3\y\5\i\s\4\1\9\b\w\l\o\l\r\h\4\q\y\e\3\f\k\r\v\d\d\6\a\9\m\7\b\f\b\7\3\o\o\5\g\m\8\w\8\9\k\n\g\k\b\g\a\1\5\f\b\d\o\k\4\7\x\b\v\l\r\p\x\i\4\i\v\i\l\w\7\3\n\y\y\g\p\w\z\j\x\b\o\7\u\8\b\o\o\7\k\9\2\j\p\3\4\0\m\e\g\x\z\u\j\9\w\u\s\g\0\n\w\s\w\1\c\n\y\q\7\t\0\5\6\j\p\m\5\f\6\9\m\v\2\o\2\l\8\z\i\y\s\9\h\y\j\9\r\z\q\s\d\o\e\k\d\x\u\z\v\c\f\6\g\g\o\8\0\6\i\d\c\t\z\p\8\j\2\2\e\g\9\e\7\e\f\h\7\g\s\7\5\a\r\s\2\p\d\4\p\0\d\u\g\u\5\3\x\c\1\t\y\1\s\e\f\a\3\h\c\i\b\m\p\d\s\n\1\6\g\q\8\u\e\x\u\g\i\z\2\n\d\v\r\v\5\y\x\2\3\v\d\w\3\j\1\a\k\9\6\u\v\c\k\9\x\7\0\z\h\6\u\7\9\b\0\q\j\r\8\z\q\j\c\f\1\5\9 ]] 00:08:36.618 20:36:31 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:36.618 20:36:31 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:08:36.618 [2024-11-26 20:36:31.443909] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:08:36.618 [2024-11-26 20:36:31.444026] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60580 ] 00:08:36.618 [2024-11-26 20:36:31.593440] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:36.876 [2024-11-26 20:36:31.675971] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:36.876 [2024-11-26 20:36:31.757092] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:36.876  [2024-11-26T20:36:32.128Z] Copying: 512/512 [B] (average 250 kBps) 00:08:37.135 00:08:37.135 20:36:32 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ h8r4vpzzrwp1olpj7azm4b8ijvr5ltair62h5p5jc3qmsnhjdhsztsimhhbw81pllg9n1r1mopf4hqgbelqluvu08an4okx8tuskjzqgyfhcxl8cabwhmerk9634bv891212mk8a88k2b6jaxi186gcg6txjwsipt1dhuoxoqe65vy1ii2ndcrz6vc5p4l7m7z94grzbjwgug788okbfla5bo13y5is419bwlolrh4qye3fkrvdd6a9m7bfb73oo5gm8w89kngkbga15fbdok47xbvlrpxi4ivilw73nyygpwzjxbo7u8boo7k92jp340megxzuj9wusg0nwsw1cnyq7t056jpm5f69mv2o2l8ziys9hyj9rzqsdoekdxuzvcf6ggo806idctzp8j22eg9e7efh7gs75ars2pd4p0dugu53xc1ty1sefa3hcibmpdsn16gq8uexugiz2ndvrv5yx23vdw3j1ak96uvck9x70zh6u79b0qjr8zqjcf159 == \h\8\r\4\v\p\z\z\r\w\p\1\o\l\p\j\7\a\z\m\4\b\8\i\j\v\r\5\l\t\a\i\r\6\2\h\5\p\5\j\c\3\q\m\s\n\h\j\d\h\s\z\t\s\i\m\h\h\b\w\8\1\p\l\l\g\9\n\1\r\1\m\o\p\f\4\h\q\g\b\e\l\q\l\u\v\u\0\8\a\n\4\o\k\x\8\t\u\s\k\j\z\q\g\y\f\h\c\x\l\8\c\a\b\w\h\m\e\r\k\9\6\3\4\b\v\8\9\1\2\1\2\m\k\8\a\8\8\k\2\b\6\j\a\x\i\1\8\6\g\c\g\6\t\x\j\w\s\i\p\t\1\d\h\u\o\x\o\q\e\6\5\v\y\1\i\i\2\n\d\c\r\z\6\v\c\5\p\4\l\7\m\7\z\9\4\g\r\z\b\j\w\g\u\g\7\8\8\o\k\b\f\l\a\5\b\o\1\3\y\5\i\s\4\1\9\b\w\l\o\l\r\h\4\q\y\e\3\f\k\r\v\d\d\6\a\9\m\7\b\f\b\7\3\o\o\5\g\m\8\w\8\9\k\n\g\k\b\g\a\1\5\f\b\d\o\k\4\7\x\b\v\l\r\p\x\i\4\i\v\i\l\w\7\3\n\y\y\g\p\w\z\j\x\b\o\7\u\8\b\o\o\7\k\9\2\j\p\3\4\0\m\e\g\x\z\u\j\9\w\u\s\g\0\n\w\s\w\1\c\n\y\q\7\t\0\5\6\j\p\m\5\f\6\9\m\v\2\o\2\l\8\z\i\y\s\9\h\y\j\9\r\z\q\s\d\o\e\k\d\x\u\z\v\c\f\6\g\g\o\8\0\6\i\d\c\t\z\p\8\j\2\2\e\g\9\e\7\e\f\h\7\g\s\7\5\a\r\s\2\p\d\4\p\0\d\u\g\u\5\3\x\c\1\t\y\1\s\e\f\a\3\h\c\i\b\m\p\d\s\n\1\6\g\q\8\u\e\x\u\g\i\z\2\n\d\v\r\v\5\y\x\2\3\v\d\w\3\j\1\a\k\9\6\u\v\c\k\9\x\7\0\z\h\6\u\7\9\b\0\q\j\r\8\z\q\j\c\f\1\5\9 ]] 00:08:37.135 20:36:32 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:37.135 20:36:32 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:08:37.394 [2024-11-26 20:36:32.137145] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:08:37.394 [2024-11-26 20:36:32.137276] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60589 ] 00:08:37.394 [2024-11-26 20:36:32.294566] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:37.652 [2024-11-26 20:36:32.389722] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:37.652 [2024-11-26 20:36:32.477707] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:37.652  [2024-11-26T20:36:32.903Z] Copying: 512/512 [B] (average 500 kBps) 00:08:37.910 00:08:37.910 ************************************ 00:08:37.910 END TEST dd_flags_misc 00:08:37.910 ************************************ 00:08:37.910 20:36:32 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ h8r4vpzzrwp1olpj7azm4b8ijvr5ltair62h5p5jc3qmsnhjdhsztsimhhbw81pllg9n1r1mopf4hqgbelqluvu08an4okx8tuskjzqgyfhcxl8cabwhmerk9634bv891212mk8a88k2b6jaxi186gcg6txjwsipt1dhuoxoqe65vy1ii2ndcrz6vc5p4l7m7z94grzbjwgug788okbfla5bo13y5is419bwlolrh4qye3fkrvdd6a9m7bfb73oo5gm8w89kngkbga15fbdok47xbvlrpxi4ivilw73nyygpwzjxbo7u8boo7k92jp340megxzuj9wusg0nwsw1cnyq7t056jpm5f69mv2o2l8ziys9hyj9rzqsdoekdxuzvcf6ggo806idctzp8j22eg9e7efh7gs75ars2pd4p0dugu53xc1ty1sefa3hcibmpdsn16gq8uexugiz2ndvrv5yx23vdw3j1ak96uvck9x70zh6u79b0qjr8zqjcf159 == \h\8\r\4\v\p\z\z\r\w\p\1\o\l\p\j\7\a\z\m\4\b\8\i\j\v\r\5\l\t\a\i\r\6\2\h\5\p\5\j\c\3\q\m\s\n\h\j\d\h\s\z\t\s\i\m\h\h\b\w\8\1\p\l\l\g\9\n\1\r\1\m\o\p\f\4\h\q\g\b\e\l\q\l\u\v\u\0\8\a\n\4\o\k\x\8\t\u\s\k\j\z\q\g\y\f\h\c\x\l\8\c\a\b\w\h\m\e\r\k\9\6\3\4\b\v\8\9\1\2\1\2\m\k\8\a\8\8\k\2\b\6\j\a\x\i\1\8\6\g\c\g\6\t\x\j\w\s\i\p\t\1\d\h\u\o\x\o\q\e\6\5\v\y\1\i\i\2\n\d\c\r\z\6\v\c\5\p\4\l\7\m\7\z\9\4\g\r\z\b\j\w\g\u\g\7\8\8\o\k\b\f\l\a\5\b\o\1\3\y\5\i\s\4\1\9\b\w\l\o\l\r\h\4\q\y\e\3\f\k\r\v\d\d\6\a\9\m\7\b\f\b\7\3\o\o\5\g\m\8\w\8\9\k\n\g\k\b\g\a\1\5\f\b\d\o\k\4\7\x\b\v\l\r\p\x\i\4\i\v\i\l\w\7\3\n\y\y\g\p\w\z\j\x\b\o\7\u\8\b\o\o\7\k\9\2\j\p\3\4\0\m\e\g\x\z\u\j\9\w\u\s\g\0\n\w\s\w\1\c\n\y\q\7\t\0\5\6\j\p\m\5\f\6\9\m\v\2\o\2\l\8\z\i\y\s\9\h\y\j\9\r\z\q\s\d\o\e\k\d\x\u\z\v\c\f\6\g\g\o\8\0\6\i\d\c\t\z\p\8\j\2\2\e\g\9\e\7\e\f\h\7\g\s\7\5\a\r\s\2\p\d\4\p\0\d\u\g\u\5\3\x\c\1\t\y\1\s\e\f\a\3\h\c\i\b\m\p\d\s\n\1\6\g\q\8\u\e\x\u\g\i\z\2\n\d\v\r\v\5\y\x\2\3\v\d\w\3\j\1\a\k\9\6\u\v\c\k\9\x\7\0\z\h\6\u\7\9\b\0\q\j\r\8\z\q\j\c\f\1\5\9 ]] 00:08:37.910 00:08:37.910 real 0m5.847s 00:08:37.910 user 0m3.303s 00:08:37.910 sys 0m3.359s 00:08:37.910 20:36:32 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:37.910 20:36:32 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:08:37.910 20:36:32 spdk_dd.spdk_dd_posix -- dd/posix.sh@131 -- # tests_forced_aio 00:08:37.910 20:36:32 spdk_dd.spdk_dd_posix -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', disabling liburing, forcing AIO' 00:08:37.910 * Second test run, disabling liburing, forcing AIO 00:08:37.910 20:36:32 spdk_dd.spdk_dd_posix -- dd/posix.sh@113 -- # DD_APP+=("--aio") 00:08:37.910 20:36:32 spdk_dd.spdk_dd_posix -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 00:08:37.910 20:36:32 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:37.910 20:36:32 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:37.910 20:36:32 spdk_dd.spdk_dd_posix -- 
common/autotest_common.sh@10 -- # set +x 00:08:37.910 ************************************ 00:08:37.911 START TEST dd_flag_append_forced_aio 00:08:37.911 ************************************ 00:08:37.911 20:36:32 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1129 -- # append 00:08:37.911 20:36:32 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@16 -- # local dump0 00:08:37.911 20:36:32 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@17 -- # local dump1 00:08:37.911 20:36:32 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # gen_bytes 32 00:08:37.911 20:36:32 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:08:37.911 20:36:32 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:37.911 20:36:32 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # dump0=4gzhcqeyr5a6wgbonjondtea9d621ghh 00:08:37.911 20:36:32 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # gen_bytes 32 00:08:37.911 20:36:32 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:08:37.911 20:36:32 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:37.911 20:36:32 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # dump1=o85ox8v605p2zjor3qpoc3r6vnlhgl3o 00:08:37.911 20:36:32 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@22 -- # printf %s 4gzhcqeyr5a6wgbonjondtea9d621ghh 00:08:37.911 20:36:32 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@23 -- # printf %s o85ox8v605p2zjor3qpoc3r6vnlhgl3o 00:08:37.911 20:36:32 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:08:38.170 [2024-11-26 20:36:32.936794] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
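The append pass above pre-seeds dd.dump1 with one 32-byte string, copies a second 32-byte string from dd.dump0 with --aio and --oflag=append, and expects the destination to end up as the old contents followed by the new ones. A minimal sketch of the same check; the /tmp working directory and the short placeholder strings are assumptions, only the spdk_dd invocation mirrors the log:

DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
WORK=/tmp/dd-append-sketch
mkdir -p "$WORK"
printf %s "BBBB" > "$WORK/dump1"   # stands in for the existing 32-byte dump1 string
printf %s "AAAA" > "$WORK/dump0"   # stands in for the 32-byte dump0 string being appended
"$DD" --aio --if="$WORK/dump0" --of="$WORK/dump1" --oflag=append
# append must preserve the existing bytes and add the new ones after them
[[ "$(< "$WORK/dump1")" == "BBBBAAAA" ]] && echo "append kept old contents and added new ones"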
00:08:38.170 [2024-11-26 20:36:32.936899] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60623 ] 00:08:38.170 [2024-11-26 20:36:33.082304] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:38.428 [2024-11-26 20:36:33.167954] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:38.428 [2024-11-26 20:36:33.251580] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:38.429  [2024-11-26T20:36:33.680Z] Copying: 32/32 [B] (average 31 kBps) 00:08:38.687 00:08:38.687 20:36:33 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@27 -- # [[ o85ox8v605p2zjor3qpoc3r6vnlhgl3o4gzhcqeyr5a6wgbonjondtea9d621ghh == \o\8\5\o\x\8\v\6\0\5\p\2\z\j\o\r\3\q\p\o\c\3\r\6\v\n\l\h\g\l\3\o\4\g\z\h\c\q\e\y\r\5\a\6\w\g\b\o\n\j\o\n\d\t\e\a\9\d\6\2\1\g\h\h ]] 00:08:38.687 00:08:38.687 real 0m0.728s 00:08:38.687 user 0m0.402s 00:08:38.687 sys 0m0.205s 00:08:38.687 20:36:33 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:38.687 20:36:33 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:38.687 ************************************ 00:08:38.687 END TEST dd_flag_append_forced_aio 00:08:38.687 ************************************ 00:08:38.687 20:36:33 spdk_dd.spdk_dd_posix -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 00:08:38.687 20:36:33 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:38.687 20:36:33 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:38.687 20:36:33 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:08:38.687 ************************************ 00:08:38.687 START TEST dd_flag_directory_forced_aio 00:08:38.687 ************************************ 00:08:38.687 20:36:33 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1129 -- # directory 00:08:38.687 20:36:33 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:38.687 20:36:33 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@652 -- # local es=0 00:08:38.687 20:36:33 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:38.687 20:36:33 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:38.687 20:36:33 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:38.687 20:36:33 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:38.946 20:36:33 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:38.946 20:36:33 
spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:38.946 20:36:33 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:38.946 20:36:33 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:38.946 20:36:33 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:38.946 20:36:33 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:38.946 [2024-11-26 20:36:33.754777] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:08:38.946 [2024-11-26 20:36:33.754932] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60650 ] 00:08:38.946 [2024-11-26 20:36:33.921924] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:39.205 [2024-11-26 20:36:34.012791] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:39.205 [2024-11-26 20:36:34.099515] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:39.205 [2024-11-26 20:36:34.161289] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:39.205 [2024-11-26 20:36:34.161364] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:39.205 [2024-11-26 20:36:34.161389] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:39.463 [2024-11-26 20:36:34.361421] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:39.724 20:36:34 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # es=236 00:08:39.724 20:36:34 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:39.724 20:36:34 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@664 -- # es=108 00:08:39.724 20:36:34 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in 00:08:39.724 20:36:34 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@672 -- # es=1 00:08:39.724 20:36:34 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:39.724 20:36:34 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:08:39.724 20:36:34 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@652 -- # local es=0 00:08:39.724 20:36:34 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 
--of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:08:39.724 20:36:34 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:39.724 20:36:34 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:39.724 20:36:34 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:39.724 20:36:34 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:39.724 20:36:34 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:39.724 20:36:34 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:39.724 20:36:34 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:39.724 20:36:34 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:39.724 20:36:34 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:08:39.724 [2024-11-26 20:36:34.528019] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:08:39.724 [2024-11-26 20:36:34.528141] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60665 ] 00:08:39.724 [2024-11-26 20:36:34.678078] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:39.983 [2024-11-26 20:36:34.764141] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:39.983 [2024-11-26 20:36:34.849990] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:39.983 [2024-11-26 20:36:34.909787] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:39.983 [2024-11-26 20:36:34.909844] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:39.983 [2024-11-26 20:36:34.909865] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:40.241 [2024-11-26 20:36:35.101508] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:40.241 20:36:35 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # es=236 00:08:40.241 20:36:35 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:40.241 20:36:35 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@664 -- # es=108 00:08:40.241 20:36:35 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in 00:08:40.241 20:36:35 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@672 -- # es=1 00:08:40.241 20:36:35 
spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:40.241 00:08:40.241 real 0m1.514s 00:08:40.241 user 0m0.852s 00:08:40.241 sys 0m0.447s 00:08:40.241 20:36:35 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:40.241 20:36:35 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:40.241 ************************************ 00:08:40.241 END TEST dd_flag_directory_forced_aio 00:08:40.241 ************************************ 00:08:40.499 20:36:35 spdk_dd.spdk_dd_posix -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 00:08:40.499 20:36:35 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:40.499 20:36:35 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:40.499 20:36:35 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:08:40.499 ************************************ 00:08:40.499 START TEST dd_flag_nofollow_forced_aio 00:08:40.499 ************************************ 00:08:40.499 20:36:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1129 -- # nofollow 00:08:40.499 20:36:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:08:40.499 20:36:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:08:40.499 20:36:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:08:40.499 20:36:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:08:40.499 20:36:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:40.499 20:36:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@652 -- # local es=0 00:08:40.499 20:36:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:40.499 20:36:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:40.499 20:36:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:40.499 20:36:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:40.499 20:36:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:40.499 20:36:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:40.499 20:36:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:40.499 20:36:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:40.500 20:36:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:40.500 20:36:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:40.500 [2024-11-26 20:36:35.308985] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:08:40.500 [2024-11-26 20:36:35.309282] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60693 ] 00:08:40.500 [2024-11-26 20:36:35.452116] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:40.758 [2024-11-26 20:36:35.534675] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:40.758 [2024-11-26 20:36:35.617082] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:40.758 [2024-11-26 20:36:35.674914] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:08:40.758 [2024-11-26 20:36:35.674983] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:08:40.758 [2024-11-26 20:36:35.675004] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:41.016 [2024-11-26 20:36:35.868688] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:41.016 20:36:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # es=216 00:08:41.016 20:36:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:41.016 20:36:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@664 -- # es=88 00:08:41.016 20:36:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in 00:08:41.016 20:36:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@672 -- # es=1 00:08:41.016 20:36:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:41.016 20:36:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:08:41.016 20:36:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@652 -- # local es=0 00:08:41.016 20:36:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:08:41.017 20:36:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # local 
arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:41.017 20:36:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:41.017 20:36:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:41.017 20:36:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:41.017 20:36:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:41.017 20:36:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:41.017 20:36:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:41.017 20:36:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:41.017 20:36:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:08:41.307 [2024-11-26 20:36:36.009699] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:08:41.307 [2024-11-26 20:36:36.010030] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60707 ] 00:08:41.307 [2024-11-26 20:36:36.165253] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:41.307 [2024-11-26 20:36:36.259200] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:41.565 [2024-11-26 20:36:36.350938] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:41.566 [2024-11-26 20:36:36.413690] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:08:41.566 [2024-11-26 20:36:36.413766] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:08:41.566 [2024-11-26 20:36:36.413795] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:41.824 [2024-11-26 20:36:36.610868] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:41.824 20:36:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # es=216 00:08:41.824 20:36:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:41.824 20:36:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@664 -- # es=88 00:08:41.824 20:36:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in 00:08:41.824 20:36:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@672 -- # es=1 00:08:41.824 20:36:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:41.824 20:36:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@46 
-- # gen_bytes 512 00:08:41.824 20:36:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:08:41.824 20:36:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:41.824 20:36:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:41.824 [2024-11-26 20:36:36.767197] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:08:41.824 [2024-11-26 20:36:36.767566] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60716 ] 00:08:42.083 [2024-11-26 20:36:36.927651] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:42.083 [2024-11-26 20:36:37.020389] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:42.345 [2024-11-26 20:36:37.109063] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:42.345  [2024-11-26T20:36:37.597Z] Copying: 512/512 [B] (average 500 kBps) 00:08:42.604 00:08:42.604 20:36:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@49 -- # [[ gs4lr9kxyngdpn79inyunanwao0w7incn6bi48rbe1f66eppgdayohw3xkmkeyyb5x57u5vs4xny6cby9xgehohwn0tmnm0a7b3j621pcc6kwwxns5eff21bejw0f8738o2y5jxniif5tqihakx0azvotupbk2xbt73jgiueuyh0obj2gbakm57mzy4xlb8i4zyhzkzt3yzk8ess0f33ukw2q7g06om5fkovd2lf7zh2r1ncfs6krvnaeiql7m5s9hybz7k8jn4c18epd1ydsvbveuofhhrltmt3ukgk49pp1qm8rsbz3t41ucvkh2xmk8uz7elfx55vxb83l8xubvo7fajgh6t7q3bogwojydanj471pkydc0zbdtzrhxv3xnmviy2fj8bstlr6bm3f6dlwv4krx5teaus87riyqt2lfn5z0yscufkkba8ylv1m0yh3aeqtyvx98ksr28k1miy80gpmo0c9bc36nduel1nnfevct3a224itatjdt7sp == \g\s\4\l\r\9\k\x\y\n\g\d\p\n\7\9\i\n\y\u\n\a\n\w\a\o\0\w\7\i\n\c\n\6\b\i\4\8\r\b\e\1\f\6\6\e\p\p\g\d\a\y\o\h\w\3\x\k\m\k\e\y\y\b\5\x\5\7\u\5\v\s\4\x\n\y\6\c\b\y\9\x\g\e\h\o\h\w\n\0\t\m\n\m\0\a\7\b\3\j\6\2\1\p\c\c\6\k\w\w\x\n\s\5\e\f\f\2\1\b\e\j\w\0\f\8\7\3\8\o\2\y\5\j\x\n\i\i\f\5\t\q\i\h\a\k\x\0\a\z\v\o\t\u\p\b\k\2\x\b\t\7\3\j\g\i\u\e\u\y\h\0\o\b\j\2\g\b\a\k\m\5\7\m\z\y\4\x\l\b\8\i\4\z\y\h\z\k\z\t\3\y\z\k\8\e\s\s\0\f\3\3\u\k\w\2\q\7\g\0\6\o\m\5\f\k\o\v\d\2\l\f\7\z\h\2\r\1\n\c\f\s\6\k\r\v\n\a\e\i\q\l\7\m\5\s\9\h\y\b\z\7\k\8\j\n\4\c\1\8\e\p\d\1\y\d\s\v\b\v\e\u\o\f\h\h\r\l\t\m\t\3\u\k\g\k\4\9\p\p\1\q\m\8\r\s\b\z\3\t\4\1\u\c\v\k\h\2\x\m\k\8\u\z\7\e\l\f\x\5\5\v\x\b\8\3\l\8\x\u\b\v\o\7\f\a\j\g\h\6\t\7\q\3\b\o\g\w\o\j\y\d\a\n\j\4\7\1\p\k\y\d\c\0\z\b\d\t\z\r\h\x\v\3\x\n\m\v\i\y\2\f\j\8\b\s\t\l\r\6\b\m\3\f\6\d\l\w\v\4\k\r\x\5\t\e\a\u\s\8\7\r\i\y\q\t\2\l\f\n\5\z\0\y\s\c\u\f\k\k\b\a\8\y\l\v\1\m\0\y\h\3\a\e\q\t\y\v\x\9\8\k\s\r\2\8\k\1\m\i\y\8\0\g\p\m\o\0\c\9\b\c\3\6\n\d\u\e\l\1\n\n\f\e\v\c\t\3\a\2\2\4\i\t\a\t\j\d\t\7\s\p ]] 00:08:42.604 00:08:42.604 real 0m2.247s 00:08:42.604 user 0m1.283s 00:08:42.604 sys 0m0.614s 00:08:42.604 20:36:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:42.604 20:36:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:42.604 ************************************ 00:08:42.604 END TEST dd_flag_nofollow_forced_aio 00:08:42.604 ************************************ 00:08:42.604 20:36:37 spdk_dd.spdk_dd_posix -- dd/posix.sh@117 
-- # run_test dd_flag_noatime_forced_aio noatime 00:08:42.604 20:36:37 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:42.604 20:36:37 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:42.604 20:36:37 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:08:42.604 ************************************ 00:08:42.604 START TEST dd_flag_noatime_forced_aio 00:08:42.604 ************************************ 00:08:42.604 20:36:37 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1129 -- # noatime 00:08:42.604 20:36:37 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@53 -- # local atime_if 00:08:42.604 20:36:37 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@54 -- # local atime_of 00:08:42.604 20:36:37 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@58 -- # gen_bytes 512 00:08:42.604 20:36:37 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:08:42.604 20:36:37 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:42.604 20:36:37 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:42.604 20:36:37 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # atime_if=1732653397 00:08:42.604 20:36:37 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:42.604 20:36:37 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # atime_of=1732653397 00:08:42.604 20:36:37 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@66 -- # sleep 1 00:08:43.982 20:36:38 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:43.982 [2024-11-26 20:36:38.639346] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
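The noatime pass set up just above records the access time of dd.dump0 with stat --printf=%X, sleeps one second, copies the file with --iflag=noatime, and expects the access time to be unchanged (a later copy without the flag is then expected to advance it). A minimal sketch of that check, reusing the commands and paths shown in the log:

DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
SRC=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
DST=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
atime_before=$(stat --printf=%X "$SRC")
sleep 1
# with noatime, reading the source must not update its access time
"$DD" --aio --if="$SRC" --iflag=noatime --of="$DST"
atime_after=$(stat --printf=%X "$SRC")
(( atime_before == atime_after )) && echo "noatime: access time unchanged"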
00:08:43.982 [2024-11-26 20:36:38.639727] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60762 ] 00:08:43.982 [2024-11-26 20:36:38.793424] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:43.982 [2024-11-26 20:36:38.889380] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:44.240 [2024-11-26 20:36:38.977260] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:44.240  [2024-11-26T20:36:39.491Z] Copying: 512/512 [B] (average 500 kBps) 00:08:44.498 00:08:44.498 20:36:39 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:44.498 20:36:39 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # (( atime_if == 1732653397 )) 00:08:44.498 20:36:39 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:44.498 20:36:39 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # (( atime_of == 1732653397 )) 00:08:44.498 20:36:39 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:44.498 [2024-11-26 20:36:39.414037] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:08:44.498 [2024-11-26 20:36:39.414134] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60773 ] 00:08:44.756 [2024-11-26 20:36:39.569614] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:44.757 [2024-11-26 20:36:39.660838] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:45.014 [2024-11-26 20:36:39.747658] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:45.014  [2024-11-26T20:36:40.265Z] Copying: 512/512 [B] (average 500 kBps) 00:08:45.272 00:08:45.272 20:36:40 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:45.272 20:36:40 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # (( atime_if < 1732653399 )) 00:08:45.272 00:08:45.272 real 0m2.571s 00:08:45.272 user 0m0.893s 00:08:45.272 sys 0m0.426s 00:08:45.272 20:36:40 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:45.272 20:36:40 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:45.272 ************************************ 00:08:45.272 END TEST dd_flag_noatime_forced_aio 00:08:45.272 ************************************ 00:08:45.272 20:36:40 spdk_dd.spdk_dd_posix -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 00:08:45.272 20:36:40 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:45.272 20:36:40 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:45.272 20:36:40 spdk_dd.spdk_dd_posix -- 
common/autotest_common.sh@10 -- # set +x 00:08:45.272 ************************************ 00:08:45.272 START TEST dd_flags_misc_forced_aio 00:08:45.272 ************************************ 00:08:45.272 20:36:40 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1129 -- # io 00:08:45.272 20:36:40 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:08:45.272 20:36:40 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:08:45.272 20:36:40 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:08:45.272 20:36:40 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:08:45.272 20:36:40 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:08:45.272 20:36:40 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:08:45.272 20:36:40 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:45.272 20:36:40 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:45.272 20:36:40 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:08:45.272 [2024-11-26 20:36:40.245094] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:08:45.272 [2024-11-26 20:36:40.245236] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60804 ] 00:08:45.531 [2024-11-26 20:36:40.408809] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:45.813 [2024-11-26 20:36:40.523398] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:45.813 [2024-11-26 20:36:40.611480] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:45.813  [2024-11-26T20:36:41.065Z] Copying: 512/512 [B] (average 500 kBps) 00:08:46.072 00:08:46.072 20:36:40 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ smykvlumo4mky197zqu6lf9rmol6ogl4h5h7dlts6jwtgcl3uu77d02ekksi0mgtbn4h6ck64nd4m6nk1cbneugifknn24cp5g86hxiw8izd99ph7cguoe8mlb2j08gzxbhru52udgfh4ah22bkdfmqwx41w0bb0iao0o6je9iygrvaxc17024tcv4kxoty2spqoi71gmzmv6zr5cx0xg9ppz0ezdgdgiqgzh9futso8lv9susf7nq1mlyz26gomdqb2rpz50or320lr94jrjgs1x9jaxze185eucpx3p8y7fu4g3tph39au820fupxbgqxrtrttadj4mrqjwsycn1nr8deb18jymtyvgasy4ruqmzagitzhfx9qc8j27lbbyh9n3i207gcodp2zwmhyt3bfw838g9gdwfssugznxh3zybmbymk2cet5ys6htiyca811jegmt0ru5d87ezdjglxrnptpsw06u1zpemjsb1eqlgzjnji5kbus3jolx838 == 
\s\m\y\k\v\l\u\m\o\4\m\k\y\1\9\7\z\q\u\6\l\f\9\r\m\o\l\6\o\g\l\4\h\5\h\7\d\l\t\s\6\j\w\t\g\c\l\3\u\u\7\7\d\0\2\e\k\k\s\i\0\m\g\t\b\n\4\h\6\c\k\6\4\n\d\4\m\6\n\k\1\c\b\n\e\u\g\i\f\k\n\n\2\4\c\p\5\g\8\6\h\x\i\w\8\i\z\d\9\9\p\h\7\c\g\u\o\e\8\m\l\b\2\j\0\8\g\z\x\b\h\r\u\5\2\u\d\g\f\h\4\a\h\2\2\b\k\d\f\m\q\w\x\4\1\w\0\b\b\0\i\a\o\0\o\6\j\e\9\i\y\g\r\v\a\x\c\1\7\0\2\4\t\c\v\4\k\x\o\t\y\2\s\p\q\o\i\7\1\g\m\z\m\v\6\z\r\5\c\x\0\x\g\9\p\p\z\0\e\z\d\g\d\g\i\q\g\z\h\9\f\u\t\s\o\8\l\v\9\s\u\s\f\7\n\q\1\m\l\y\z\2\6\g\o\m\d\q\b\2\r\p\z\5\0\o\r\3\2\0\l\r\9\4\j\r\j\g\s\1\x\9\j\a\x\z\e\1\8\5\e\u\c\p\x\3\p\8\y\7\f\u\4\g\3\t\p\h\3\9\a\u\8\2\0\f\u\p\x\b\g\q\x\r\t\r\t\t\a\d\j\4\m\r\q\j\w\s\y\c\n\1\n\r\8\d\e\b\1\8\j\y\m\t\y\v\g\a\s\y\4\r\u\q\m\z\a\g\i\t\z\h\f\x\9\q\c\8\j\2\7\l\b\b\y\h\9\n\3\i\2\0\7\g\c\o\d\p\2\z\w\m\h\y\t\3\b\f\w\8\3\8\g\9\g\d\w\f\s\s\u\g\z\n\x\h\3\z\y\b\m\b\y\m\k\2\c\e\t\5\y\s\6\h\t\i\y\c\a\8\1\1\j\e\g\m\t\0\r\u\5\d\8\7\e\z\d\j\g\l\x\r\n\p\t\p\s\w\0\6\u\1\z\p\e\m\j\s\b\1\e\q\l\g\z\j\n\j\i\5\k\b\u\s\3\j\o\l\x\8\3\8 ]] 00:08:46.072 20:36:40 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:46.072 20:36:40 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:08:46.072 [2024-11-26 20:36:41.022864] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:08:46.073 [2024-11-26 20:36:41.023214] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60813 ] 00:08:46.331 [2024-11-26 20:36:41.167993] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:46.331 [2024-11-26 20:36:41.258904] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:46.589 [2024-11-26 20:36:41.342112] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:46.589  [2024-11-26T20:36:41.842Z] Copying: 512/512 [B] (average 500 kBps) 00:08:46.849 00:08:46.849 20:36:41 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ smykvlumo4mky197zqu6lf9rmol6ogl4h5h7dlts6jwtgcl3uu77d02ekksi0mgtbn4h6ck64nd4m6nk1cbneugifknn24cp5g86hxiw8izd99ph7cguoe8mlb2j08gzxbhru52udgfh4ah22bkdfmqwx41w0bb0iao0o6je9iygrvaxc17024tcv4kxoty2spqoi71gmzmv6zr5cx0xg9ppz0ezdgdgiqgzh9futso8lv9susf7nq1mlyz26gomdqb2rpz50or320lr94jrjgs1x9jaxze185eucpx3p8y7fu4g3tph39au820fupxbgqxrtrttadj4mrqjwsycn1nr8deb18jymtyvgasy4ruqmzagitzhfx9qc8j27lbbyh9n3i207gcodp2zwmhyt3bfw838g9gdwfssugznxh3zybmbymk2cet5ys6htiyca811jegmt0ru5d87ezdjglxrnptpsw06u1zpemjsb1eqlgzjnji5kbus3jolx838 == 
\s\m\y\k\v\l\u\m\o\4\m\k\y\1\9\7\z\q\u\6\l\f\9\r\m\o\l\6\o\g\l\4\h\5\h\7\d\l\t\s\6\j\w\t\g\c\l\3\u\u\7\7\d\0\2\e\k\k\s\i\0\m\g\t\b\n\4\h\6\c\k\6\4\n\d\4\m\6\n\k\1\c\b\n\e\u\g\i\f\k\n\n\2\4\c\p\5\g\8\6\h\x\i\w\8\i\z\d\9\9\p\h\7\c\g\u\o\e\8\m\l\b\2\j\0\8\g\z\x\b\h\r\u\5\2\u\d\g\f\h\4\a\h\2\2\b\k\d\f\m\q\w\x\4\1\w\0\b\b\0\i\a\o\0\o\6\j\e\9\i\y\g\r\v\a\x\c\1\7\0\2\4\t\c\v\4\k\x\o\t\y\2\s\p\q\o\i\7\1\g\m\z\m\v\6\z\r\5\c\x\0\x\g\9\p\p\z\0\e\z\d\g\d\g\i\q\g\z\h\9\f\u\t\s\o\8\l\v\9\s\u\s\f\7\n\q\1\m\l\y\z\2\6\g\o\m\d\q\b\2\r\p\z\5\0\o\r\3\2\0\l\r\9\4\j\r\j\g\s\1\x\9\j\a\x\z\e\1\8\5\e\u\c\p\x\3\p\8\y\7\f\u\4\g\3\t\p\h\3\9\a\u\8\2\0\f\u\p\x\b\g\q\x\r\t\r\t\t\a\d\j\4\m\r\q\j\w\s\y\c\n\1\n\r\8\d\e\b\1\8\j\y\m\t\y\v\g\a\s\y\4\r\u\q\m\z\a\g\i\t\z\h\f\x\9\q\c\8\j\2\7\l\b\b\y\h\9\n\3\i\2\0\7\g\c\o\d\p\2\z\w\m\h\y\t\3\b\f\w\8\3\8\g\9\g\d\w\f\s\s\u\g\z\n\x\h\3\z\y\b\m\b\y\m\k\2\c\e\t\5\y\s\6\h\t\i\y\c\a\8\1\1\j\e\g\m\t\0\r\u\5\d\8\7\e\z\d\j\g\l\x\r\n\p\t\p\s\w\0\6\u\1\z\p\e\m\j\s\b\1\e\q\l\g\z\j\n\j\i\5\k\b\u\s\3\j\o\l\x\8\3\8 ]] 00:08:46.849 20:36:41 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:46.849 20:36:41 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:08:46.849 [2024-11-26 20:36:41.767228] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:08:46.849 [2024-11-26 20:36:41.767369] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60826 ] 00:08:47.107 [2024-11-26 20:36:41.928630] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:47.107 [2024-11-26 20:36:42.013394] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:47.107 [2024-11-26 20:36:42.097136] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:47.366  [2024-11-26T20:36:42.617Z] Copying: 512/512 [B] (average 250 kBps) 00:08:47.624 00:08:47.624 20:36:42 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ smykvlumo4mky197zqu6lf9rmol6ogl4h5h7dlts6jwtgcl3uu77d02ekksi0mgtbn4h6ck64nd4m6nk1cbneugifknn24cp5g86hxiw8izd99ph7cguoe8mlb2j08gzxbhru52udgfh4ah22bkdfmqwx41w0bb0iao0o6je9iygrvaxc17024tcv4kxoty2spqoi71gmzmv6zr5cx0xg9ppz0ezdgdgiqgzh9futso8lv9susf7nq1mlyz26gomdqb2rpz50or320lr94jrjgs1x9jaxze185eucpx3p8y7fu4g3tph39au820fupxbgqxrtrttadj4mrqjwsycn1nr8deb18jymtyvgasy4ruqmzagitzhfx9qc8j27lbbyh9n3i207gcodp2zwmhyt3bfw838g9gdwfssugznxh3zybmbymk2cet5ys6htiyca811jegmt0ru5d87ezdjglxrnptpsw06u1zpemjsb1eqlgzjnji5kbus3jolx838 == 
\s\m\y\k\v\l\u\m\o\4\m\k\y\1\9\7\z\q\u\6\l\f\9\r\m\o\l\6\o\g\l\4\h\5\h\7\d\l\t\s\6\j\w\t\g\c\l\3\u\u\7\7\d\0\2\e\k\k\s\i\0\m\g\t\b\n\4\h\6\c\k\6\4\n\d\4\m\6\n\k\1\c\b\n\e\u\g\i\f\k\n\n\2\4\c\p\5\g\8\6\h\x\i\w\8\i\z\d\9\9\p\h\7\c\g\u\o\e\8\m\l\b\2\j\0\8\g\z\x\b\h\r\u\5\2\u\d\g\f\h\4\a\h\2\2\b\k\d\f\m\q\w\x\4\1\w\0\b\b\0\i\a\o\0\o\6\j\e\9\i\y\g\r\v\a\x\c\1\7\0\2\4\t\c\v\4\k\x\o\t\y\2\s\p\q\o\i\7\1\g\m\z\m\v\6\z\r\5\c\x\0\x\g\9\p\p\z\0\e\z\d\g\d\g\i\q\g\z\h\9\f\u\t\s\o\8\l\v\9\s\u\s\f\7\n\q\1\m\l\y\z\2\6\g\o\m\d\q\b\2\r\p\z\5\0\o\r\3\2\0\l\r\9\4\j\r\j\g\s\1\x\9\j\a\x\z\e\1\8\5\e\u\c\p\x\3\p\8\y\7\f\u\4\g\3\t\p\h\3\9\a\u\8\2\0\f\u\p\x\b\g\q\x\r\t\r\t\t\a\d\j\4\m\r\q\j\w\s\y\c\n\1\n\r\8\d\e\b\1\8\j\y\m\t\y\v\g\a\s\y\4\r\u\q\m\z\a\g\i\t\z\h\f\x\9\q\c\8\j\2\7\l\b\b\y\h\9\n\3\i\2\0\7\g\c\o\d\p\2\z\w\m\h\y\t\3\b\f\w\8\3\8\g\9\g\d\w\f\s\s\u\g\z\n\x\h\3\z\y\b\m\b\y\m\k\2\c\e\t\5\y\s\6\h\t\i\y\c\a\8\1\1\j\e\g\m\t\0\r\u\5\d\8\7\e\z\d\j\g\l\x\r\n\p\t\p\s\w\0\6\u\1\z\p\e\m\j\s\b\1\e\q\l\g\z\j\n\j\i\5\k\b\u\s\3\j\o\l\x\8\3\8 ]] 00:08:47.624 20:36:42 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:47.624 20:36:42 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:08:47.624 [2024-11-26 20:36:42.519192] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:08:47.624 [2024-11-26 20:36:42.519309] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60834 ] 00:08:47.883 [2024-11-26 20:36:42.668389] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:47.883 [2024-11-26 20:36:42.754385] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:47.883 [2024-11-26 20:36:42.837964] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:48.141  [2024-11-26T20:36:43.394Z] Copying: 512/512 [B] (average 250 kBps) 00:08:48.401 00:08:48.401 20:36:43 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ smykvlumo4mky197zqu6lf9rmol6ogl4h5h7dlts6jwtgcl3uu77d02ekksi0mgtbn4h6ck64nd4m6nk1cbneugifknn24cp5g86hxiw8izd99ph7cguoe8mlb2j08gzxbhru52udgfh4ah22bkdfmqwx41w0bb0iao0o6je9iygrvaxc17024tcv4kxoty2spqoi71gmzmv6zr5cx0xg9ppz0ezdgdgiqgzh9futso8lv9susf7nq1mlyz26gomdqb2rpz50or320lr94jrjgs1x9jaxze185eucpx3p8y7fu4g3tph39au820fupxbgqxrtrttadj4mrqjwsycn1nr8deb18jymtyvgasy4ruqmzagitzhfx9qc8j27lbbyh9n3i207gcodp2zwmhyt3bfw838g9gdwfssugznxh3zybmbymk2cet5ys6htiyca811jegmt0ru5d87ezdjglxrnptpsw06u1zpemjsb1eqlgzjnji5kbus3jolx838 == 
\s\m\y\k\v\l\u\m\o\4\m\k\y\1\9\7\z\q\u\6\l\f\9\r\m\o\l\6\o\g\l\4\h\5\h\7\d\l\t\s\6\j\w\t\g\c\l\3\u\u\7\7\d\0\2\e\k\k\s\i\0\m\g\t\b\n\4\h\6\c\k\6\4\n\d\4\m\6\n\k\1\c\b\n\e\u\g\i\f\k\n\n\2\4\c\p\5\g\8\6\h\x\i\w\8\i\z\d\9\9\p\h\7\c\g\u\o\e\8\m\l\b\2\j\0\8\g\z\x\b\h\r\u\5\2\u\d\g\f\h\4\a\h\2\2\b\k\d\f\m\q\w\x\4\1\w\0\b\b\0\i\a\o\0\o\6\j\e\9\i\y\g\r\v\a\x\c\1\7\0\2\4\t\c\v\4\k\x\o\t\y\2\s\p\q\o\i\7\1\g\m\z\m\v\6\z\r\5\c\x\0\x\g\9\p\p\z\0\e\z\d\g\d\g\i\q\g\z\h\9\f\u\t\s\o\8\l\v\9\s\u\s\f\7\n\q\1\m\l\y\z\2\6\g\o\m\d\q\b\2\r\p\z\5\0\o\r\3\2\0\l\r\9\4\j\r\j\g\s\1\x\9\j\a\x\z\e\1\8\5\e\u\c\p\x\3\p\8\y\7\f\u\4\g\3\t\p\h\3\9\a\u\8\2\0\f\u\p\x\b\g\q\x\r\t\r\t\t\a\d\j\4\m\r\q\j\w\s\y\c\n\1\n\r\8\d\e\b\1\8\j\y\m\t\y\v\g\a\s\y\4\r\u\q\m\z\a\g\i\t\z\h\f\x\9\q\c\8\j\2\7\l\b\b\y\h\9\n\3\i\2\0\7\g\c\o\d\p\2\z\w\m\h\y\t\3\b\f\w\8\3\8\g\9\g\d\w\f\s\s\u\g\z\n\x\h\3\z\y\b\m\b\y\m\k\2\c\e\t\5\y\s\6\h\t\i\y\c\a\8\1\1\j\e\g\m\t\0\r\u\5\d\8\7\e\z\d\j\g\l\x\r\n\p\t\p\s\w\0\6\u\1\z\p\e\m\j\s\b\1\e\q\l\g\z\j\n\j\i\5\k\b\u\s\3\j\o\l\x\8\3\8 ]] 00:08:48.401 20:36:43 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:08:48.401 20:36:43 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:08:48.401 20:36:43 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:08:48.401 20:36:43 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:48.401 20:36:43 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:48.401 20:36:43 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:08:48.401 [2024-11-26 20:36:43.260234] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:08:48.401 [2024-11-26 20:36:43.260346] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60841 ] 00:08:48.660 [2024-11-26 20:36:43.411182] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:48.660 [2024-11-26 20:36:43.492200] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:48.660 [2024-11-26 20:36:43.572972] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:48.660  [2024-11-26T20:36:43.913Z] Copying: 512/512 [B] (average 500 kBps) 00:08:48.920 00:08:49.180 20:36:43 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ f7jgl5z7ujyb4upwiumq84mt5n8mjc153nmsqzzynqaictcu0tfrxw7flas4z08t4r9tw35d6jsxjp0if4q82ypm1kf1jjzixymyzzkfpe0zguvjmhmvue5a3x80vof0s4c0yh6tfq7bcv00b3dz984p5fg7vwwfikqsxilep3co4mkopv616doubrjnaidprkgmgsqdaz36mqemypm7afetffzoc9xmv52puu1txodkkn28kosq7svhxtwt7i8g1ohkfjbfq2hwr9w9gjkipoxhky0fsu316clgkt5jmf7tlpzifnmfo4877y1j4entdps7k6srw8iievvjhqnm9e3mr8ryulbd0qaxwarm9sidnzlcymgncszzijtuv05q2wpj79e9jigrevuzfks9ihokbdsisf2jljod0kg1ggevofvds7zdrx0ztamlwtwrsmquc3ddc9bo0xh68vll66vonb7nqpjnqw6z5an5zee6obll8l71k2u8w2t2c87v == \f\7\j\g\l\5\z\7\u\j\y\b\4\u\p\w\i\u\m\q\8\4\m\t\5\n\8\m\j\c\1\5\3\n\m\s\q\z\z\y\n\q\a\i\c\t\c\u\0\t\f\r\x\w\7\f\l\a\s\4\z\0\8\t\4\r\9\t\w\3\5\d\6\j\s\x\j\p\0\i\f\4\q\8\2\y\p\m\1\k\f\1\j\j\z\i\x\y\m\y\z\z\k\f\p\e\0\z\g\u\v\j\m\h\m\v\u\e\5\a\3\x\8\0\v\o\f\0\s\4\c\0\y\h\6\t\f\q\7\b\c\v\0\0\b\3\d\z\9\8\4\p\5\f\g\7\v\w\w\f\i\k\q\s\x\i\l\e\p\3\c\o\4\m\k\o\p\v\6\1\6\d\o\u\b\r\j\n\a\i\d\p\r\k\g\m\g\s\q\d\a\z\3\6\m\q\e\m\y\p\m\7\a\f\e\t\f\f\z\o\c\9\x\m\v\5\2\p\u\u\1\t\x\o\d\k\k\n\2\8\k\o\s\q\7\s\v\h\x\t\w\t\7\i\8\g\1\o\h\k\f\j\b\f\q\2\h\w\r\9\w\9\g\j\k\i\p\o\x\h\k\y\0\f\s\u\3\1\6\c\l\g\k\t\5\j\m\f\7\t\l\p\z\i\f\n\m\f\o\4\8\7\7\y\1\j\4\e\n\t\d\p\s\7\k\6\s\r\w\8\i\i\e\v\v\j\h\q\n\m\9\e\3\m\r\8\r\y\u\l\b\d\0\q\a\x\w\a\r\m\9\s\i\d\n\z\l\c\y\m\g\n\c\s\z\z\i\j\t\u\v\0\5\q\2\w\p\j\7\9\e\9\j\i\g\r\e\v\u\z\f\k\s\9\i\h\o\k\b\d\s\i\s\f\2\j\l\j\o\d\0\k\g\1\g\g\e\v\o\f\v\d\s\7\z\d\r\x\0\z\t\a\m\l\w\t\w\r\s\m\q\u\c\3\d\d\c\9\b\o\0\x\h\6\8\v\l\l\6\6\v\o\n\b\7\n\q\p\j\n\q\w\6\z\5\a\n\5\z\e\e\6\o\b\l\l\8\l\7\1\k\2\u\8\w\2\t\2\c\8\7\v ]] 00:08:49.180 20:36:43 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:49.180 20:36:43 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:08:49.180 [2024-11-26 20:36:43.971359] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:08:49.180 [2024-11-26 20:36:43.971474] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60854 ] 00:08:49.180 [2024-11-26 20:36:44.122411] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:49.439 [2024-11-26 20:36:44.201366] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:49.439 [2024-11-26 20:36:44.281452] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:49.439  [2024-11-26T20:36:44.690Z] Copying: 512/512 [B] (average 500 kBps) 00:08:49.697 00:08:49.697 20:36:44 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ f7jgl5z7ujyb4upwiumq84mt5n8mjc153nmsqzzynqaictcu0tfrxw7flas4z08t4r9tw35d6jsxjp0if4q82ypm1kf1jjzixymyzzkfpe0zguvjmhmvue5a3x80vof0s4c0yh6tfq7bcv00b3dz984p5fg7vwwfikqsxilep3co4mkopv616doubrjnaidprkgmgsqdaz36mqemypm7afetffzoc9xmv52puu1txodkkn28kosq7svhxtwt7i8g1ohkfjbfq2hwr9w9gjkipoxhky0fsu316clgkt5jmf7tlpzifnmfo4877y1j4entdps7k6srw8iievvjhqnm9e3mr8ryulbd0qaxwarm9sidnzlcymgncszzijtuv05q2wpj79e9jigrevuzfks9ihokbdsisf2jljod0kg1ggevofvds7zdrx0ztamlwtwrsmquc3ddc9bo0xh68vll66vonb7nqpjnqw6z5an5zee6obll8l71k2u8w2t2c87v == \f\7\j\g\l\5\z\7\u\j\y\b\4\u\p\w\i\u\m\q\8\4\m\t\5\n\8\m\j\c\1\5\3\n\m\s\q\z\z\y\n\q\a\i\c\t\c\u\0\t\f\r\x\w\7\f\l\a\s\4\z\0\8\t\4\r\9\t\w\3\5\d\6\j\s\x\j\p\0\i\f\4\q\8\2\y\p\m\1\k\f\1\j\j\z\i\x\y\m\y\z\z\k\f\p\e\0\z\g\u\v\j\m\h\m\v\u\e\5\a\3\x\8\0\v\o\f\0\s\4\c\0\y\h\6\t\f\q\7\b\c\v\0\0\b\3\d\z\9\8\4\p\5\f\g\7\v\w\w\f\i\k\q\s\x\i\l\e\p\3\c\o\4\m\k\o\p\v\6\1\6\d\o\u\b\r\j\n\a\i\d\p\r\k\g\m\g\s\q\d\a\z\3\6\m\q\e\m\y\p\m\7\a\f\e\t\f\f\z\o\c\9\x\m\v\5\2\p\u\u\1\t\x\o\d\k\k\n\2\8\k\o\s\q\7\s\v\h\x\t\w\t\7\i\8\g\1\o\h\k\f\j\b\f\q\2\h\w\r\9\w\9\g\j\k\i\p\o\x\h\k\y\0\f\s\u\3\1\6\c\l\g\k\t\5\j\m\f\7\t\l\p\z\i\f\n\m\f\o\4\8\7\7\y\1\j\4\e\n\t\d\p\s\7\k\6\s\r\w\8\i\i\e\v\v\j\h\q\n\m\9\e\3\m\r\8\r\y\u\l\b\d\0\q\a\x\w\a\r\m\9\s\i\d\n\z\l\c\y\m\g\n\c\s\z\z\i\j\t\u\v\0\5\q\2\w\p\j\7\9\e\9\j\i\g\r\e\v\u\z\f\k\s\9\i\h\o\k\b\d\s\i\s\f\2\j\l\j\o\d\0\k\g\1\g\g\e\v\o\f\v\d\s\7\z\d\r\x\0\z\t\a\m\l\w\t\w\r\s\m\q\u\c\3\d\d\c\9\b\o\0\x\h\6\8\v\l\l\6\6\v\o\n\b\7\n\q\p\j\n\q\w\6\z\5\a\n\5\z\e\e\6\o\b\l\l\8\l\7\1\k\2\u\8\w\2\t\2\c\8\7\v ]] 00:08:49.697 20:36:44 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:49.698 20:36:44 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:08:49.957 [2024-11-26 20:36:44.694150] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:08:49.957 [2024-11-26 20:36:44.694279] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60867 ] 00:08:49.957 [2024-11-26 20:36:44.841689] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:49.957 [2024-11-26 20:36:44.917924] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:50.216 [2024-11-26 20:36:44.998857] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:50.216  [2024-11-26T20:36:45.469Z] Copying: 512/512 [B] (average 500 kBps) 00:08:50.476 00:08:50.477 20:36:45 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ f7jgl5z7ujyb4upwiumq84mt5n8mjc153nmsqzzynqaictcu0tfrxw7flas4z08t4r9tw35d6jsxjp0if4q82ypm1kf1jjzixymyzzkfpe0zguvjmhmvue5a3x80vof0s4c0yh6tfq7bcv00b3dz984p5fg7vwwfikqsxilep3co4mkopv616doubrjnaidprkgmgsqdaz36mqemypm7afetffzoc9xmv52puu1txodkkn28kosq7svhxtwt7i8g1ohkfjbfq2hwr9w9gjkipoxhky0fsu316clgkt5jmf7tlpzifnmfo4877y1j4entdps7k6srw8iievvjhqnm9e3mr8ryulbd0qaxwarm9sidnzlcymgncszzijtuv05q2wpj79e9jigrevuzfks9ihokbdsisf2jljod0kg1ggevofvds7zdrx0ztamlwtwrsmquc3ddc9bo0xh68vll66vonb7nqpjnqw6z5an5zee6obll8l71k2u8w2t2c87v == \f\7\j\g\l\5\z\7\u\j\y\b\4\u\p\w\i\u\m\q\8\4\m\t\5\n\8\m\j\c\1\5\3\n\m\s\q\z\z\y\n\q\a\i\c\t\c\u\0\t\f\r\x\w\7\f\l\a\s\4\z\0\8\t\4\r\9\t\w\3\5\d\6\j\s\x\j\p\0\i\f\4\q\8\2\y\p\m\1\k\f\1\j\j\z\i\x\y\m\y\z\z\k\f\p\e\0\z\g\u\v\j\m\h\m\v\u\e\5\a\3\x\8\0\v\o\f\0\s\4\c\0\y\h\6\t\f\q\7\b\c\v\0\0\b\3\d\z\9\8\4\p\5\f\g\7\v\w\w\f\i\k\q\s\x\i\l\e\p\3\c\o\4\m\k\o\p\v\6\1\6\d\o\u\b\r\j\n\a\i\d\p\r\k\g\m\g\s\q\d\a\z\3\6\m\q\e\m\y\p\m\7\a\f\e\t\f\f\z\o\c\9\x\m\v\5\2\p\u\u\1\t\x\o\d\k\k\n\2\8\k\o\s\q\7\s\v\h\x\t\w\t\7\i\8\g\1\o\h\k\f\j\b\f\q\2\h\w\r\9\w\9\g\j\k\i\p\o\x\h\k\y\0\f\s\u\3\1\6\c\l\g\k\t\5\j\m\f\7\t\l\p\z\i\f\n\m\f\o\4\8\7\7\y\1\j\4\e\n\t\d\p\s\7\k\6\s\r\w\8\i\i\e\v\v\j\h\q\n\m\9\e\3\m\r\8\r\y\u\l\b\d\0\q\a\x\w\a\r\m\9\s\i\d\n\z\l\c\y\m\g\n\c\s\z\z\i\j\t\u\v\0\5\q\2\w\p\j\7\9\e\9\j\i\g\r\e\v\u\z\f\k\s\9\i\h\o\k\b\d\s\i\s\f\2\j\l\j\o\d\0\k\g\1\g\g\e\v\o\f\v\d\s\7\z\d\r\x\0\z\t\a\m\l\w\t\w\r\s\m\q\u\c\3\d\d\c\9\b\o\0\x\h\6\8\v\l\l\6\6\v\o\n\b\7\n\q\p\j\n\q\w\6\z\5\a\n\5\z\e\e\6\o\b\l\l\8\l\7\1\k\2\u\8\w\2\t\2\c\8\7\v ]] 00:08:50.477 20:36:45 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:50.477 20:36:45 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:08:50.477 [2024-11-26 20:36:45.400921] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:08:50.477 [2024-11-26 20:36:45.401033] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60869 ] 00:08:50.737 [2024-11-26 20:36:45.552133] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:50.737 [2024-11-26 20:36:45.633662] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:50.737 [2024-11-26 20:36:45.714720] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:50.995  [2024-11-26T20:36:46.247Z] Copying: 512/512 [B] (average 500 kBps) 00:08:51.254 00:08:51.254 ************************************ 00:08:51.254 END TEST dd_flags_misc_forced_aio 00:08:51.254 ************************************ 00:08:51.255 20:36:46 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ f7jgl5z7ujyb4upwiumq84mt5n8mjc153nmsqzzynqaictcu0tfrxw7flas4z08t4r9tw35d6jsxjp0if4q82ypm1kf1jjzixymyzzkfpe0zguvjmhmvue5a3x80vof0s4c0yh6tfq7bcv00b3dz984p5fg7vwwfikqsxilep3co4mkopv616doubrjnaidprkgmgsqdaz36mqemypm7afetffzoc9xmv52puu1txodkkn28kosq7svhxtwt7i8g1ohkfjbfq2hwr9w9gjkipoxhky0fsu316clgkt5jmf7tlpzifnmfo4877y1j4entdps7k6srw8iievvjhqnm9e3mr8ryulbd0qaxwarm9sidnzlcymgncszzijtuv05q2wpj79e9jigrevuzfks9ihokbdsisf2jljod0kg1ggevofvds7zdrx0ztamlwtwrsmquc3ddc9bo0xh68vll66vonb7nqpjnqw6z5an5zee6obll8l71k2u8w2t2c87v == \f\7\j\g\l\5\z\7\u\j\y\b\4\u\p\w\i\u\m\q\8\4\m\t\5\n\8\m\j\c\1\5\3\n\m\s\q\z\z\y\n\q\a\i\c\t\c\u\0\t\f\r\x\w\7\f\l\a\s\4\z\0\8\t\4\r\9\t\w\3\5\d\6\j\s\x\j\p\0\i\f\4\q\8\2\y\p\m\1\k\f\1\j\j\z\i\x\y\m\y\z\z\k\f\p\e\0\z\g\u\v\j\m\h\m\v\u\e\5\a\3\x\8\0\v\o\f\0\s\4\c\0\y\h\6\t\f\q\7\b\c\v\0\0\b\3\d\z\9\8\4\p\5\f\g\7\v\w\w\f\i\k\q\s\x\i\l\e\p\3\c\o\4\m\k\o\p\v\6\1\6\d\o\u\b\r\j\n\a\i\d\p\r\k\g\m\g\s\q\d\a\z\3\6\m\q\e\m\y\p\m\7\a\f\e\t\f\f\z\o\c\9\x\m\v\5\2\p\u\u\1\t\x\o\d\k\k\n\2\8\k\o\s\q\7\s\v\h\x\t\w\t\7\i\8\g\1\o\h\k\f\j\b\f\q\2\h\w\r\9\w\9\g\j\k\i\p\o\x\h\k\y\0\f\s\u\3\1\6\c\l\g\k\t\5\j\m\f\7\t\l\p\z\i\f\n\m\f\o\4\8\7\7\y\1\j\4\e\n\t\d\p\s\7\k\6\s\r\w\8\i\i\e\v\v\j\h\q\n\m\9\e\3\m\r\8\r\y\u\l\b\d\0\q\a\x\w\a\r\m\9\s\i\d\n\z\l\c\y\m\g\n\c\s\z\z\i\j\t\u\v\0\5\q\2\w\p\j\7\9\e\9\j\i\g\r\e\v\u\z\f\k\s\9\i\h\o\k\b\d\s\i\s\f\2\j\l\j\o\d\0\k\g\1\g\g\e\v\o\f\v\d\s\7\z\d\r\x\0\z\t\a\m\l\w\t\w\r\s\m\q\u\c\3\d\d\c\9\b\o\0\x\h\6\8\v\l\l\6\6\v\o\n\b\7\n\q\p\j\n\q\w\6\z\5\a\n\5\z\e\e\6\o\b\l\l\8\l\7\1\k\2\u\8\w\2\t\2\c\8\7\v ]] 00:08:51.255 00:08:51.255 real 0m5.878s 00:08:51.255 user 0m3.245s 00:08:51.255 sys 0m1.627s 00:08:51.255 20:36:46 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:51.255 20:36:46 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:51.255 20:36:46 spdk_dd.spdk_dd_posix -- dd/posix.sh@1 -- # cleanup 00:08:51.255 20:36:46 spdk_dd.spdk_dd_posix -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:08:51.255 20:36:46 spdk_dd.spdk_dd_posix -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:08:51.255 ************************************ 00:08:51.255 END TEST spdk_dd_posix 00:08:51.255 ************************************ 00:08:51.255 00:08:51.255 real 0m26.354s 00:08:51.255 user 0m13.560s 00:08:51.255 sys 0m9.586s 00:08:51.255 20:36:46 spdk_dd.spdk_dd_posix -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:08:51.255 20:36:46 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:08:51.255 20:36:46 spdk_dd -- dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:08:51.255 20:36:46 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:51.255 20:36:46 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:51.255 20:36:46 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:08:51.255 ************************************ 00:08:51.255 START TEST spdk_dd_malloc 00:08:51.255 ************************************ 00:08:51.255 20:36:46 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:08:51.513 * Looking for test storage... 00:08:51.513 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:51.513 20:36:46 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:51.513 20:36:46 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1693 -- # lcov --version 00:08:51.513 20:36:46 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:51.513 20:36:46 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:51.513 20:36:46 spdk_dd.spdk_dd_malloc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:51.513 20:36:46 spdk_dd.spdk_dd_malloc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:51.513 20:36:46 spdk_dd.spdk_dd_malloc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:51.513 20:36:46 spdk_dd.spdk_dd_malloc -- scripts/common.sh@336 -- # IFS=.-: 00:08:51.513 20:36:46 spdk_dd.spdk_dd_malloc -- scripts/common.sh@336 -- # read -ra ver1 00:08:51.514 20:36:46 spdk_dd.spdk_dd_malloc -- scripts/common.sh@337 -- # IFS=.-: 00:08:51.514 20:36:46 spdk_dd.spdk_dd_malloc -- scripts/common.sh@337 -- # read -ra ver2 00:08:51.514 20:36:46 spdk_dd.spdk_dd_malloc -- scripts/common.sh@338 -- # local 'op=<' 00:08:51.514 20:36:46 spdk_dd.spdk_dd_malloc -- scripts/common.sh@340 -- # ver1_l=2 00:08:51.514 20:36:46 spdk_dd.spdk_dd_malloc -- scripts/common.sh@341 -- # ver2_l=1 00:08:51.514 20:36:46 spdk_dd.spdk_dd_malloc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:51.514 20:36:46 spdk_dd.spdk_dd_malloc -- scripts/common.sh@344 -- # case "$op" in 00:08:51.514 20:36:46 spdk_dd.spdk_dd_malloc -- scripts/common.sh@345 -- # : 1 00:08:51.514 20:36:46 spdk_dd.spdk_dd_malloc -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:51.514 20:36:46 spdk_dd.spdk_dd_malloc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:51.514 20:36:46 spdk_dd.spdk_dd_malloc -- scripts/common.sh@365 -- # decimal 1 00:08:51.514 20:36:46 spdk_dd.spdk_dd_malloc -- scripts/common.sh@353 -- # local d=1 00:08:51.514 20:36:46 spdk_dd.spdk_dd_malloc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:51.514 20:36:46 spdk_dd.spdk_dd_malloc -- scripts/common.sh@355 -- # echo 1 00:08:51.514 20:36:46 spdk_dd.spdk_dd_malloc -- scripts/common.sh@365 -- # ver1[v]=1 00:08:51.514 20:36:46 spdk_dd.spdk_dd_malloc -- scripts/common.sh@366 -- # decimal 2 00:08:51.514 20:36:46 spdk_dd.spdk_dd_malloc -- scripts/common.sh@353 -- # local d=2 00:08:51.514 20:36:46 spdk_dd.spdk_dd_malloc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:51.514 20:36:46 spdk_dd.spdk_dd_malloc -- scripts/common.sh@355 -- # echo 2 00:08:51.514 20:36:46 spdk_dd.spdk_dd_malloc -- scripts/common.sh@366 -- # ver2[v]=2 00:08:51.514 20:36:46 spdk_dd.spdk_dd_malloc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:51.514 20:36:46 spdk_dd.spdk_dd_malloc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:51.514 20:36:46 spdk_dd.spdk_dd_malloc -- scripts/common.sh@368 -- # return 0 00:08:51.514 20:36:46 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:51.514 20:36:46 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:51.514 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:51.514 --rc genhtml_branch_coverage=1 00:08:51.514 --rc genhtml_function_coverage=1 00:08:51.514 --rc genhtml_legend=1 00:08:51.514 --rc geninfo_all_blocks=1 00:08:51.514 --rc geninfo_unexecuted_blocks=1 00:08:51.514 00:08:51.514 ' 00:08:51.514 20:36:46 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:51.514 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:51.514 --rc genhtml_branch_coverage=1 00:08:51.514 --rc genhtml_function_coverage=1 00:08:51.514 --rc genhtml_legend=1 00:08:51.514 --rc geninfo_all_blocks=1 00:08:51.514 --rc geninfo_unexecuted_blocks=1 00:08:51.514 00:08:51.514 ' 00:08:51.514 20:36:46 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:51.514 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:51.514 --rc genhtml_branch_coverage=1 00:08:51.514 --rc genhtml_function_coverage=1 00:08:51.514 --rc genhtml_legend=1 00:08:51.514 --rc geninfo_all_blocks=1 00:08:51.514 --rc geninfo_unexecuted_blocks=1 00:08:51.514 00:08:51.514 ' 00:08:51.514 20:36:46 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:51.514 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:51.514 --rc genhtml_branch_coverage=1 00:08:51.514 --rc genhtml_function_coverage=1 00:08:51.514 --rc genhtml_legend=1 00:08:51.514 --rc geninfo_all_blocks=1 00:08:51.514 --rc geninfo_unexecuted_blocks=1 00:08:51.514 00:08:51.514 ' 00:08:51.514 20:36:46 spdk_dd.spdk_dd_malloc -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:51.514 20:36:46 spdk_dd.spdk_dd_malloc -- scripts/common.sh@15 -- # shopt -s extglob 00:08:51.514 20:36:46 spdk_dd.spdk_dd_malloc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:51.514 20:36:46 spdk_dd.spdk_dd_malloc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:51.514 20:36:46 spdk_dd.spdk_dd_malloc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:51.514 20:36:46 
spdk_dd.spdk_dd_malloc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:51.514 20:36:46 spdk_dd.spdk_dd_malloc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:51.514 20:36:46 spdk_dd.spdk_dd_malloc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:51.514 20:36:46 spdk_dd.spdk_dd_malloc -- paths/export.sh@5 -- # export PATH 00:08:51.514 20:36:46 spdk_dd.spdk_dd_malloc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:51.514 20:36:46 spdk_dd.spdk_dd_malloc -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 00:08:51.514 20:36:46 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:51.514 20:36:46 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:51.514 20:36:46 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:08:51.514 ************************************ 00:08:51.514 START TEST dd_malloc_copy 00:08:51.514 ************************************ 00:08:51.514 20:36:46 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1129 -- # malloc_copy 00:08:51.514 20:36:46 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512 00:08:51.514 20:36:46 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 00:08:51.514 20:36:46 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 
00:08:51.514 20:36:46 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 00:08:51.514 20:36:46 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='1048576' ['block_size']='512') 00:08:51.514 20:36:46 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 00:08:51.514 20:36:46 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 00:08:51.514 20:36:46 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # gen_conf 00:08:51.514 20:36:46 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:08:51.514 20:36:46 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:08:51.514 { 00:08:51.514 "subsystems": [ 00:08:51.514 { 00:08:51.514 "subsystem": "bdev", 00:08:51.514 "config": [ 00:08:51.514 { 00:08:51.514 "params": { 00:08:51.514 "block_size": 512, 00:08:51.514 "num_blocks": 1048576, 00:08:51.514 "name": "malloc0" 00:08:51.514 }, 00:08:51.514 "method": "bdev_malloc_create" 00:08:51.514 }, 00:08:51.514 { 00:08:51.514 "params": { 00:08:51.514 "block_size": 512, 00:08:51.514 "num_blocks": 1048576, 00:08:51.514 "name": "malloc1" 00:08:51.514 }, 00:08:51.514 "method": "bdev_malloc_create" 00:08:51.514 }, 00:08:51.514 { 00:08:51.514 "method": "bdev_wait_for_examine" 00:08:51.514 } 00:08:51.514 ] 00:08:51.514 } 00:08:51.514 ] 00:08:51.514 } 00:08:51.514 [2024-11-26 20:36:46.425280] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:08:51.514 [2024-11-26 20:36:46.425394] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60951 ] 00:08:51.773 [2024-11-26 20:36:46.583686] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:51.773 [2024-11-26 20:36:46.672930] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:51.773 [2024-11-26 20:36:46.759425] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:53.149  [2024-11-26T20:36:49.517Z] Copying: 227/512 [MB] (227 MBps) [2024-11-26T20:36:49.517Z] Copying: 455/512 [MB] (227 MBps) [2024-11-26T20:36:50.083Z] Copying: 512/512 [MB] (average 226 MBps) 00:08:55.090 00:08:55.090 20:36:50 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # gen_conf 00:08:55.090 20:36:50 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 00:08:55.090 20:36:50 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:08:55.090 20:36:50 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:08:55.090 [2024-11-26 20:36:50.069662] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:08:55.090 [2024-11-26 20:36:50.069766] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61004 ] 00:08:55.090 { 00:08:55.090 "subsystems": [ 00:08:55.090 { 00:08:55.090 "subsystem": "bdev", 00:08:55.090 "config": [ 00:08:55.090 { 00:08:55.090 "params": { 00:08:55.090 "block_size": 512, 00:08:55.090 "num_blocks": 1048576, 00:08:55.090 "name": "malloc0" 00:08:55.090 }, 00:08:55.090 "method": "bdev_malloc_create" 00:08:55.090 }, 00:08:55.090 { 00:08:55.090 "params": { 00:08:55.090 "block_size": 512, 00:08:55.090 "num_blocks": 1048576, 00:08:55.090 "name": "malloc1" 00:08:55.090 }, 00:08:55.090 "method": "bdev_malloc_create" 00:08:55.090 }, 00:08:55.090 { 00:08:55.090 "method": "bdev_wait_for_examine" 00:08:55.090 } 00:08:55.090 ] 00:08:55.090 } 00:08:55.090 ] 00:08:55.090 } 00:08:55.348 [2024-11-26 20:36:50.217989] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:55.348 [2024-11-26 20:36:50.299114] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:55.606 [2024-11-26 20:36:50.382095] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:56.980  [2024-11-26T20:36:52.975Z] Copying: 239/512 [MB] (239 MBps) [2024-11-26T20:36:52.975Z] Copying: 477/512 [MB] (237 MBps) [2024-11-26T20:36:53.913Z] Copying: 512/512 [MB] (average 238 MBps) 00:08:58.920 00:08:58.920 00:08:58.920 real 0m7.195s 00:08:58.920 user 0m6.061s 00:08:58.920 sys 0m0.955s 00:08:58.920 20:36:53 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:58.920 20:36:53 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:08:58.920 ************************************ 00:08:58.920 END TEST dd_malloc_copy 00:08:58.920 ************************************ 00:08:58.920 ************************************ 00:08:58.920 END TEST spdk_dd_malloc 00:08:58.920 ************************************ 00:08:58.920 00:08:58.920 real 0m7.438s 00:08:58.920 user 0m6.193s 00:08:58.920 sys 0m1.076s 00:08:58.920 20:36:53 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:58.920 20:36:53 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:08:58.920 20:36:53 spdk_dd -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:08:58.920 20:36:53 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:58.920 20:36:53 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:58.920 20:36:53 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:08:58.920 ************************************ 00:08:58.920 START TEST spdk_dd_bdev_to_bdev 00:08:58.920 ************************************ 00:08:58.920 20:36:53 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:08:58.920 * Looking for test storage... 
00:08:58.920 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:58.920 20:36:53 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:58.921 20:36:53 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1693 -- # lcov --version 00:08:58.921 20:36:53 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:58.921 20:36:53 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:58.921 20:36:53 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:58.921 20:36:53 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:58.921 20:36:53 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:58.921 20:36:53 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@336 -- # IFS=.-: 00:08:58.921 20:36:53 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@336 -- # read -ra ver1 00:08:58.921 20:36:53 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@337 -- # IFS=.-: 00:08:58.921 20:36:53 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@337 -- # read -ra ver2 00:08:58.921 20:36:53 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@338 -- # local 'op=<' 00:08:58.921 20:36:53 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@340 -- # ver1_l=2 00:08:58.921 20:36:53 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@341 -- # ver2_l=1 00:08:58.921 20:36:53 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:58.921 20:36:53 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@344 -- # case "$op" in 00:08:58.921 20:36:53 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@345 -- # : 1 00:08:58.921 20:36:53 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:58.921 20:36:53 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:58.921 20:36:53 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@365 -- # decimal 1 00:08:58.921 20:36:53 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@353 -- # local d=1 00:08:58.921 20:36:53 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:58.921 20:36:53 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@355 -- # echo 1 00:08:58.921 20:36:53 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@365 -- # ver1[v]=1 00:08:58.921 20:36:53 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@366 -- # decimal 2 00:08:58.921 20:36:53 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@353 -- # local d=2 00:08:58.921 20:36:53 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:58.921 20:36:53 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@355 -- # echo 2 00:08:58.921 20:36:53 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@366 -- # ver2[v]=2 00:08:58.921 20:36:53 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:58.921 20:36:53 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:58.921 20:36:53 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@368 -- # return 0 00:08:58.921 20:36:53 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:58.921 20:36:53 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:58.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:58.921 --rc genhtml_branch_coverage=1 00:08:58.921 --rc genhtml_function_coverage=1 00:08:58.921 --rc genhtml_legend=1 00:08:58.921 --rc geninfo_all_blocks=1 00:08:58.921 --rc geninfo_unexecuted_blocks=1 00:08:58.921 00:08:58.921 ' 00:08:58.921 20:36:53 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:58.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:58.921 --rc genhtml_branch_coverage=1 00:08:58.921 --rc genhtml_function_coverage=1 00:08:58.921 --rc genhtml_legend=1 00:08:58.921 --rc geninfo_all_blocks=1 00:08:58.921 --rc geninfo_unexecuted_blocks=1 00:08:58.921 00:08:58.921 ' 00:08:58.921 20:36:53 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:58.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:58.921 --rc genhtml_branch_coverage=1 00:08:58.921 --rc genhtml_function_coverage=1 00:08:58.921 --rc genhtml_legend=1 00:08:58.921 --rc geninfo_all_blocks=1 00:08:58.921 --rc geninfo_unexecuted_blocks=1 00:08:58.921 00:08:58.921 ' 00:08:58.921 20:36:53 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:58.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:58.921 --rc genhtml_branch_coverage=1 00:08:58.921 --rc genhtml_function_coverage=1 00:08:58.921 --rc genhtml_legend=1 00:08:58.921 --rc geninfo_all_blocks=1 00:08:58.921 --rc geninfo_unexecuted_blocks=1 00:08:58.921 00:08:58.921 ' 00:08:58.921 20:36:53 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:58.921 20:36:53 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@15 -- # shopt -s extglob 00:08:58.921 20:36:53 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:58.921 20:36:53 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:58.921 20:36:53 
spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:58.921 20:36:53 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.921 20:36:53 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.921 20:36:53 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.921 20:36:53 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@5 -- # export PATH 00:08:58.921 20:36:53 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.921 20:36:53 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 00:08:58.921 20:36:53 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 00:08:58.921 20:36:53 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 00:08:58.921 20:36:53 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@51 -- # (( 2 > 1 )) 00:08:58.921 20:36:53 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0=Nvme0 00:08:58.921 20:36:53 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # bdev0=Nvme0n1 00:08:58.921 20:36:53 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0_pci=0000:00:10.0 00:08:58.921 20:36:53 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # nvme1=Nvme1 00:08:58.921 20:36:53 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # bdev1=Nvme1n1 00:08:58.921 20:36:53 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # 
nvme1_pci=0000:00:11.0 00:08:58.921 20:36:53 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:08:58.921 20:36:53 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # declare -A method_bdev_nvme_attach_controller_0 00:08:58.921 20:36:53 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # method_bdev_nvme_attach_controller_1=(['name']='Nvme1' ['traddr']='0000:00:11.0' ['trtype']='pcie') 00:08:58.921 20:36:53 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # declare -A method_bdev_nvme_attach_controller_1 00:08:58.921 20:36:53 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:58.921 20:36:53 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:58.921 20:36:53 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 00:08:58.921 20:36:53 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 00:08:58.921 20:36:53 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:08:58.921 20:36:53 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:08:58.921 20:36:53 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:58.921 20:36:53 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:58.921 ************************************ 00:08:58.921 START TEST dd_inflate_file 00:08:58.922 ************************************ 00:08:58.922 20:36:53 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:08:59.179 [2024-11-26 20:36:53.940150] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:08:59.179 [2024-11-26 20:36:53.940525] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61117 ] 00:08:59.179 [2024-11-26 20:36:54.097677] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:59.437 [2024-11-26 20:36:54.189584] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:59.437 [2024-11-26 20:36:54.276859] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:59.437  [2024-11-26T20:36:54.687Z] Copying: 64/64 [MB] (average 1523 MBps) 00:08:59.694 00:08:59.694 00:08:59.694 real 0m0.796s 00:08:59.694 user 0m0.471s 00:08:59.694 sys 0m0.424s 00:08:59.694 20:36:54 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:59.694 ************************************ 00:08:59.694 END TEST dd_inflate_file 00:08:59.694 20:36:54 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@10 -- # set +x 00:08:59.694 ************************************ 00:08:59.952 20:36:54 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # wc -c 00:08:59.952 20:36:54 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 00:08:59.952 20:36:54 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # gen_conf 00:08:59.952 20:36:54 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:08:59.952 20:36:54 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:08:59.952 20:36:54 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:59.952 20:36:54 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:08:59.952 20:36:54 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:59.952 20:36:54 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:59.952 ************************************ 00:08:59.952 START TEST dd_copy_to_out_bdev 00:08:59.952 ************************************ 00:08:59.952 20:36:54 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:08:59.952 { 00:08:59.952 "subsystems": [ 00:08:59.952 { 00:08:59.952 "subsystem": "bdev", 00:08:59.952 "config": [ 00:08:59.952 { 00:08:59.952 "params": { 00:08:59.952 "trtype": "pcie", 00:08:59.952 "traddr": "0000:00:10.0", 00:08:59.952 "name": "Nvme0" 00:08:59.952 }, 00:08:59.952 "method": "bdev_nvme_attach_controller" 00:08:59.952 }, 00:08:59.952 { 00:08:59.952 "params": { 00:08:59.952 "trtype": "pcie", 00:08:59.952 "traddr": "0000:00:11.0", 00:08:59.952 "name": "Nvme1" 00:08:59.952 }, 00:08:59.952 "method": "bdev_nvme_attach_controller" 00:08:59.952 }, 00:08:59.952 { 00:08:59.952 "method": "bdev_wait_for_examine" 00:08:59.952 } 00:08:59.952 ] 00:08:59.952 } 00:08:59.952 ] 00:08:59.952 } 00:08:59.952 [2024-11-26 20:36:54.790391] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:08:59.952 [2024-11-26 20:36:54.790474] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61150 ] 00:08:59.952 [2024-11-26 20:36:54.932677] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:00.210 [2024-11-26 20:36:55.014066] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:00.210 [2024-11-26 20:36:55.097684] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:01.210  [2024-11-26T20:36:56.770Z] Copying: 64/64 [MB] (average 76 MBps) 00:09:01.777 00:09:01.777 00:09:01.777 real 0m1.809s 00:09:01.777 user 0m1.460s 00:09:01.777 sys 0m1.468s 00:09:01.777 20:36:56 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:01.777 ************************************ 00:09:01.777 END TEST dd_copy_to_out_bdev 00:09:01.777 ************************************ 00:09:01.777 20:36:56 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@10 -- # set +x 00:09:01.777 20:36:56 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@113 -- # count=65 00:09:01.777 20:36:56 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 00:09:01.777 20:36:56 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:01.777 20:36:56 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:01.777 20:36:56 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:09:01.777 ************************************ 00:09:01.777 START TEST dd_offset_magic 00:09:01.777 ************************************ 00:09:01.777 20:36:56 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1129 -- # offset_magic 00:09:01.777 20:36:56 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@13 -- # local magic_check 00:09:01.777 20:36:56 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 00:09:01.777 20:36:56 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 00:09:01.777 20:36:56 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:09:01.777 20:36:56 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 00:09:01.777 20:36:56 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:09:01.777 20:36:56 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:09:01.777 20:36:56 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:09:01.777 [2024-11-26 20:36:56.665447] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:09:01.777 [2024-11-26 20:36:56.665772] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61195 ] 00:09:01.777 { 00:09:01.777 "subsystems": [ 00:09:01.777 { 00:09:01.777 "subsystem": "bdev", 00:09:01.777 "config": [ 00:09:01.777 { 00:09:01.777 "params": { 00:09:01.777 "trtype": "pcie", 00:09:01.777 "traddr": "0000:00:10.0", 00:09:01.777 "name": "Nvme0" 00:09:01.777 }, 00:09:01.777 "method": "bdev_nvme_attach_controller" 00:09:01.777 }, 00:09:01.777 { 00:09:01.777 "params": { 00:09:01.777 "trtype": "pcie", 00:09:01.777 "traddr": "0000:00:11.0", 00:09:01.777 "name": "Nvme1" 00:09:01.777 }, 00:09:01.777 "method": "bdev_nvme_attach_controller" 00:09:01.777 }, 00:09:01.777 { 00:09:01.777 "method": "bdev_wait_for_examine" 00:09:01.777 } 00:09:01.777 ] 00:09:01.777 } 00:09:01.777 ] 00:09:01.777 } 00:09:02.037 [2024-11-26 20:36:56.811146] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:02.037 [2024-11-26 20:36:56.895704] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:02.037 [2024-11-26 20:36:56.976902] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:02.297  [2024-11-26T20:36:57.858Z] Copying: 65/65 [MB] (average 1031 MBps) 00:09:02.865 00:09:02.865 20:36:57 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 00:09:02.865 20:36:57 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:09:02.865 20:36:57 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:09:02.865 20:36:57 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:09:02.865 { 00:09:02.865 "subsystems": [ 00:09:02.865 { 00:09:02.865 "subsystem": "bdev", 00:09:02.865 "config": [ 00:09:02.865 { 00:09:02.865 "params": { 00:09:02.865 "trtype": "pcie", 00:09:02.865 "traddr": "0000:00:10.0", 00:09:02.865 "name": "Nvme0" 00:09:02.865 }, 00:09:02.865 "method": "bdev_nvme_attach_controller" 00:09:02.865 }, 00:09:02.865 { 00:09:02.865 "params": { 00:09:02.865 "trtype": "pcie", 00:09:02.865 "traddr": "0000:00:11.0", 00:09:02.865 "name": "Nvme1" 00:09:02.865 }, 00:09:02.865 "method": "bdev_nvme_attach_controller" 00:09:02.865 }, 00:09:02.865 { 00:09:02.865 "method": "bdev_wait_for_examine" 00:09:02.865 } 00:09:02.865 ] 00:09:02.865 } 00:09:02.865 ] 00:09:02.865 } 00:09:02.865 [2024-11-26 20:36:57.617236] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:09:02.865 [2024-11-26 20:36:57.617343] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61215 ] 00:09:02.865 [2024-11-26 20:36:57.766501] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:02.866 [2024-11-26 20:36:57.845681] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:03.124 [2024-11-26 20:36:57.929296] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:03.382  [2024-11-26T20:36:58.633Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:09:03.640 00:09:03.640 20:36:58 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:09:03.640 20:36:58 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:09:03.640 20:36:58 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:09:03.640 20:36:58 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 00:09:03.640 20:36:58 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:09:03.640 20:36:58 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:09:03.640 20:36:58 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:09:03.640 { 00:09:03.640 "subsystems": [ 00:09:03.640 { 00:09:03.640 "subsystem": "bdev", 00:09:03.640 "config": [ 00:09:03.640 { 00:09:03.640 "params": { 00:09:03.640 "trtype": "pcie", 00:09:03.640 "traddr": "0000:00:10.0", 00:09:03.640 "name": "Nvme0" 00:09:03.640 }, 00:09:03.640 "method": "bdev_nvme_attach_controller" 00:09:03.640 }, 00:09:03.640 { 00:09:03.640 "params": { 00:09:03.640 "trtype": "pcie", 00:09:03.640 "traddr": "0000:00:11.0", 00:09:03.640 "name": "Nvme1" 00:09:03.640 }, 00:09:03.640 "method": "bdev_nvme_attach_controller" 00:09:03.640 }, 00:09:03.640 { 00:09:03.640 "method": "bdev_wait_for_examine" 00:09:03.640 } 00:09:03.640 ] 00:09:03.640 } 00:09:03.640 ] 00:09:03.640 } 00:09:03.640 [2024-11-26 20:36:58.470961] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:09:03.640 [2024-11-26 20:36:58.471075] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61237 ] 00:09:03.640 [2024-11-26 20:36:58.624271] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:03.900 [2024-11-26 20:36:58.706459] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:03.900 [2024-11-26 20:36:58.788604] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:04.158  [2024-11-26T20:36:59.428Z] Copying: 65/65 [MB] (average 1120 MBps) 00:09:04.435 00:09:04.435 20:36:59 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 00:09:04.435 20:36:59 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:09:04.435 20:36:59 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:09:04.435 20:36:59 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:09:04.697 [2024-11-26 20:36:59.448429] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:09:04.697 [2024-11-26 20:36:59.448548] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61257 ] 00:09:04.697 { 00:09:04.697 "subsystems": [ 00:09:04.697 { 00:09:04.697 "subsystem": "bdev", 00:09:04.697 "config": [ 00:09:04.697 { 00:09:04.697 "params": { 00:09:04.697 "trtype": "pcie", 00:09:04.697 "traddr": "0000:00:10.0", 00:09:04.697 "name": "Nvme0" 00:09:04.697 }, 00:09:04.697 "method": "bdev_nvme_attach_controller" 00:09:04.697 }, 00:09:04.697 { 00:09:04.697 "params": { 00:09:04.697 "trtype": "pcie", 00:09:04.697 "traddr": "0000:00:11.0", 00:09:04.697 "name": "Nvme1" 00:09:04.697 }, 00:09:04.697 "method": "bdev_nvme_attach_controller" 00:09:04.697 }, 00:09:04.697 { 00:09:04.697 "method": "bdev_wait_for_examine" 00:09:04.697 } 00:09:04.697 ] 00:09:04.697 } 00:09:04.697 ] 00:09:04.697 } 00:09:04.697 [2024-11-26 20:36:59.603270] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:04.697 [2024-11-26 20:36:59.687008] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:04.955 [2024-11-26 20:36:59.769766] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:05.214  [2024-11-26T20:37:00.466Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:09:05.473 00:09:05.473 20:37:00 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:09:05.473 20:37:00 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:09:05.473 00:09:05.473 real 0m3.648s 00:09:05.473 user 0m2.516s 00:09:05.473 sys 0m1.311s 00:09:05.473 20:37:00 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:05.473 ************************************ 00:09:05.473 END TEST dd_offset_magic 00:09:05.473 ************************************ 
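The dd_offset_magic pass that ends here repeats one pattern per offset: copy a window between the two controllers, dump a single block back out, and check that a known 26-byte magic string survived. A condensed sketch of that flow, reconstructed from the invocations traced above (the magic itself is written earlier in the test, outside this excerpt, and the input redirection for read is assumed):

    # copy a 65 MiB window from Nvme0n1 into Nvme1n1 at the offset under test (64 MiB here)
    build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62
    # dump the block at that offset back out of Nvme1n1 into a scratch file
    build/bin/spdk_dd --ib=Nvme1n1 --of=test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62
    # the first 26 bytes must still be the magic written before the copy
    read -rn26 magic_check < test/dd/dd.dump1
    [[ $magic_check == "This Is Our Magic, find it" ]]

The JSON on fd 62 is the two-controller bdev config dumped repeatedly above: bdev_nvme_attach_controller for 0000:00:10.0 and 0000:00:11.0, followed by bdev_wait_for_examine.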
00:09:05.473 20:37:00 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:09:05.473 20:37:00 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@1 -- # cleanup 00:09:05.473 20:37:00 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 00:09:05.473 20:37:00 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:09:05.473 20:37:00 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:09:05.474 20:37:00 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:09:05.474 20:37:00 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:09:05.474 20:37:00 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:09:05.474 20:37:00 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 00:09:05.474 20:37:00 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:09:05.474 20:37:00 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:09:05.474 20:37:00 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:09:05.474 { 00:09:05.474 "subsystems": [ 00:09:05.474 { 00:09:05.474 "subsystem": "bdev", 00:09:05.474 "config": [ 00:09:05.474 { 00:09:05.474 "params": { 00:09:05.474 "trtype": "pcie", 00:09:05.474 "traddr": "0000:00:10.0", 00:09:05.474 "name": "Nvme0" 00:09:05.474 }, 00:09:05.474 "method": "bdev_nvme_attach_controller" 00:09:05.474 }, 00:09:05.474 { 00:09:05.474 "params": { 00:09:05.474 "trtype": "pcie", 00:09:05.474 "traddr": "0000:00:11.0", 00:09:05.474 "name": "Nvme1" 00:09:05.474 }, 00:09:05.474 "method": "bdev_nvme_attach_controller" 00:09:05.474 }, 00:09:05.474 { 00:09:05.474 "method": "bdev_wait_for_examine" 00:09:05.474 } 00:09:05.474 ] 00:09:05.474 } 00:09:05.474 ] 00:09:05.474 } 00:09:05.474 [2024-11-26 20:37:00.391670] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:09:05.474 [2024-11-26 20:37:00.392123] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61293 ] 00:09:05.732 [2024-11-26 20:37:00.556152] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:05.732 [2024-11-26 20:37:00.647422] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:05.990 [2024-11-26 20:37:00.734620] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:05.990  [2024-11-26T20:37:01.242Z] Copying: 5120/5120 [kB] (average 1000 MBps) 00:09:06.249 00:09:06.249 20:37:01 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@43 -- # clear_nvme Nvme1n1 '' 4194330 00:09:06.249 20:37:01 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme1n1 00:09:06.249 20:37:01 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:09:06.249 20:37:01 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:09:06.249 20:37:01 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:09:06.249 20:37:01 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:09:06.507 20:37:01 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:09:06.507 20:37:01 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme1n1 --count=5 --json /dev/fd/62 00:09:06.507 20:37:01 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:09:06.507 20:37:01 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:09:06.507 [2024-11-26 20:37:01.296073] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
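The cleanup running here is the clear_nvme helper: it zeroes the first five 1 MiB blocks (covering the 4194330-byte region the test touched) on each controller through the same bdev config. Stripped of the xtrace noise it is simply:

    # size=4194330 bytes, bs=1048576, so count=5 blocks of zeroes per bdev
    build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62
    build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme1n1 --count=5 --json /dev/fd/62

fd 62 again carries the generated bdev JSON; how it is wired up (presumably process substitution around gen_conf) is not visible in the trace.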
00:09:06.507 { 00:09:06.507 "subsystems": [ 00:09:06.507 { 00:09:06.507 "subsystem": "bdev", 00:09:06.507 "config": [ 00:09:06.507 { 00:09:06.507 "params": { 00:09:06.507 "trtype": "pcie", 00:09:06.507 "traddr": "0000:00:10.0", 00:09:06.507 "name": "Nvme0" 00:09:06.507 }, 00:09:06.507 "method": "bdev_nvme_attach_controller" 00:09:06.507 }, 00:09:06.507 { 00:09:06.507 "params": { 00:09:06.507 "trtype": "pcie", 00:09:06.507 "traddr": "0000:00:11.0", 00:09:06.507 "name": "Nvme1" 00:09:06.507 }, 00:09:06.507 "method": "bdev_nvme_attach_controller" 00:09:06.507 }, 00:09:06.507 { 00:09:06.507 "method": "bdev_wait_for_examine" 00:09:06.507 } 00:09:06.507 ] 00:09:06.507 } 00:09:06.507 ] 00:09:06.507 } 00:09:06.507 [2024-11-26 20:37:01.296430] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61310 ] 00:09:06.507 [2024-11-26 20:37:01.451740] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:06.765 [2024-11-26 20:37:01.542874] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:06.765 [2024-11-26 20:37:01.631143] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:07.060  [2024-11-26T20:37:02.311Z] Copying: 5120/5120 [kB] (average 714 MBps) 00:09:07.318 00:09:07.318 20:37:02 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 '' 00:09:07.318 00:09:07.318 real 0m8.510s 00:09:07.318 user 0m5.866s 00:09:07.318 sys 0m4.245s 00:09:07.318 ************************************ 00:09:07.318 END TEST spdk_dd_bdev_to_bdev 00:09:07.318 ************************************ 00:09:07.318 20:37:02 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:07.318 20:37:02 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:09:07.318 20:37:02 spdk_dd -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 00:09:07.318 20:37:02 spdk_dd -- dd/dd.sh@25 -- # run_test spdk_dd_uring /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:09:07.318 20:37:02 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:07.318 20:37:02 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:07.318 20:37:02 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:09:07.318 ************************************ 00:09:07.318 START TEST spdk_dd_uring 00:09:07.318 ************************************ 00:09:07.318 20:37:02 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:09:07.318 * Looking for test storage... 
00:09:07.577 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:09:07.577 20:37:02 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:07.577 20:37:02 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1693 -- # lcov --version 00:09:07.577 20:37:02 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:07.577 20:37:02 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:07.577 20:37:02 spdk_dd.spdk_dd_uring -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:07.577 20:37:02 spdk_dd.spdk_dd_uring -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:07.577 20:37:02 spdk_dd.spdk_dd_uring -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:07.577 20:37:02 spdk_dd.spdk_dd_uring -- scripts/common.sh@336 -- # IFS=.-: 00:09:07.577 20:37:02 spdk_dd.spdk_dd_uring -- scripts/common.sh@336 -- # read -ra ver1 00:09:07.577 20:37:02 spdk_dd.spdk_dd_uring -- scripts/common.sh@337 -- # IFS=.-: 00:09:07.577 20:37:02 spdk_dd.spdk_dd_uring -- scripts/common.sh@337 -- # read -ra ver2 00:09:07.577 20:37:02 spdk_dd.spdk_dd_uring -- scripts/common.sh@338 -- # local 'op=<' 00:09:07.577 20:37:02 spdk_dd.spdk_dd_uring -- scripts/common.sh@340 -- # ver1_l=2 00:09:07.577 20:37:02 spdk_dd.spdk_dd_uring -- scripts/common.sh@341 -- # ver2_l=1 00:09:07.577 20:37:02 spdk_dd.spdk_dd_uring -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:07.577 20:37:02 spdk_dd.spdk_dd_uring -- scripts/common.sh@344 -- # case "$op" in 00:09:07.577 20:37:02 spdk_dd.spdk_dd_uring -- scripts/common.sh@345 -- # : 1 00:09:07.577 20:37:02 spdk_dd.spdk_dd_uring -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:07.577 20:37:02 spdk_dd.spdk_dd_uring -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:07.577 20:37:02 spdk_dd.spdk_dd_uring -- scripts/common.sh@365 -- # decimal 1 00:09:07.577 20:37:02 spdk_dd.spdk_dd_uring -- scripts/common.sh@353 -- # local d=1 00:09:07.577 20:37:02 spdk_dd.spdk_dd_uring -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:07.577 20:37:02 spdk_dd.spdk_dd_uring -- scripts/common.sh@355 -- # echo 1 00:09:07.577 20:37:02 spdk_dd.spdk_dd_uring -- scripts/common.sh@365 -- # ver1[v]=1 00:09:07.577 20:37:02 spdk_dd.spdk_dd_uring -- scripts/common.sh@366 -- # decimal 2 00:09:07.577 20:37:02 spdk_dd.spdk_dd_uring -- scripts/common.sh@353 -- # local d=2 00:09:07.577 20:37:02 spdk_dd.spdk_dd_uring -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:07.577 20:37:02 spdk_dd.spdk_dd_uring -- scripts/common.sh@355 -- # echo 2 00:09:07.578 20:37:02 spdk_dd.spdk_dd_uring -- scripts/common.sh@366 -- # ver2[v]=2 00:09:07.578 20:37:02 spdk_dd.spdk_dd_uring -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:07.578 20:37:02 spdk_dd.spdk_dd_uring -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:07.578 20:37:02 spdk_dd.spdk_dd_uring -- scripts/common.sh@368 -- # return 0 00:09:07.578 20:37:02 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:07.578 20:37:02 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:07.578 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:07.578 --rc genhtml_branch_coverage=1 00:09:07.578 --rc genhtml_function_coverage=1 00:09:07.578 --rc genhtml_legend=1 00:09:07.578 --rc geninfo_all_blocks=1 00:09:07.578 --rc geninfo_unexecuted_blocks=1 00:09:07.578 00:09:07.578 ' 00:09:07.578 20:37:02 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:07.578 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:07.578 --rc genhtml_branch_coverage=1 00:09:07.578 --rc genhtml_function_coverage=1 00:09:07.578 --rc genhtml_legend=1 00:09:07.578 --rc geninfo_all_blocks=1 00:09:07.578 --rc geninfo_unexecuted_blocks=1 00:09:07.578 00:09:07.578 ' 00:09:07.578 20:37:02 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:07.578 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:07.578 --rc genhtml_branch_coverage=1 00:09:07.578 --rc genhtml_function_coverage=1 00:09:07.578 --rc genhtml_legend=1 00:09:07.578 --rc geninfo_all_blocks=1 00:09:07.578 --rc geninfo_unexecuted_blocks=1 00:09:07.578 00:09:07.578 ' 00:09:07.578 20:37:02 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:07.578 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:07.578 --rc genhtml_branch_coverage=1 00:09:07.578 --rc genhtml_function_coverage=1 00:09:07.578 --rc genhtml_legend=1 00:09:07.578 --rc geninfo_all_blocks=1 00:09:07.578 --rc geninfo_unexecuted_blocks=1 00:09:07.578 00:09:07.578 ' 00:09:07.578 20:37:02 spdk_dd.spdk_dd_uring -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:07.578 20:37:02 spdk_dd.spdk_dd_uring -- scripts/common.sh@15 -- # shopt -s extglob 00:09:07.578 20:37:02 spdk_dd.spdk_dd_uring -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:07.578 20:37:02 spdk_dd.spdk_dd_uring -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:07.578 20:37:02 spdk_dd.spdk_dd_uring -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:07.578 20:37:02 spdk_dd.spdk_dd_uring -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:07.578 20:37:02 spdk_dd.spdk_dd_uring -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:07.578 20:37:02 spdk_dd.spdk_dd_uring -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:07.578 20:37:02 spdk_dd.spdk_dd_uring -- paths/export.sh@5 -- # export PATH 00:09:07.578 20:37:02 spdk_dd.spdk_dd_uring -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:07.578 20:37:02 spdk_dd.spdk_dd_uring -- dd/uring.sh@103 -- # run_test dd_uring_copy uring_zram_copy 00:09:07.578 20:37:02 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:07.578 20:37:02 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:07.578 20:37:02 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:09:07.578 ************************************ 00:09:07.578 START TEST dd_uring_copy 00:09:07.578 ************************************ 00:09:07.578 20:37:02 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1129 -- # uring_zram_copy 00:09:07.578 20:37:02 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@15 -- # local zram_dev_id 00:09:07.578 20:37:02 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@16 -- # local magic 00:09:07.578 20:37:02 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@17 -- # local magic_file0=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 00:09:07.578 20:37:02 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@18 -- # local magic_file1=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:09:07.578 
20:37:02 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@19 -- # local verify_magic 00:09:07.578 20:37:02 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@21 -- # init_zram 00:09:07.578 20:37:02 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@159 -- # [[ -e /sys/class/zram-control ]] 00:09:07.578 20:37:02 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@160 -- # return 00:09:07.578 20:37:02 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # create_zram_dev 00:09:07.578 20:37:02 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@164 -- # cat /sys/class/zram-control/hot_add 00:09:07.578 20:37:02 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # zram_dev_id=1 00:09:07.578 20:37:02 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@23 -- # set_zram_dev 1 512M 00:09:07.578 20:37:02 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@177 -- # local id=1 00:09:07.578 20:37:02 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@178 -- # local size=512M 00:09:07.578 20:37:02 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@180 -- # [[ -e /sys/block/zram1 ]] 00:09:07.578 20:37:02 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@182 -- # echo 512M 00:09:07.578 20:37:02 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@25 -- # local ubdev=uring0 ufile=/dev/zram1 00:09:07.578 20:37:02 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # method_bdev_uring_create_0=(['filename']='/dev/zram1' ['name']='uring0') 00:09:07.578 20:37:02 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # local -A method_bdev_uring_create_0 00:09:07.578 20:37:02 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@33 -- # local mbdev=malloc0 mbdev_b=1048576 mbdev_bs=512 00:09:07.578 20:37:02 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:09:07.578 20:37:02 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # local -A method_bdev_malloc_create_0 00:09:07.578 20:37:02 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # gen_bytes 1024 00:09:07.578 20:37:02 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@98 -- # xtrace_disable 00:09:07.578 20:37:02 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:09:07.578 20:37:02 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # magic=ap7tva39dxemiolft6ge82vnrqphvcqi9bype4yp4lt1bwi1c2kcpsdpfi5olu4e7tvzfk6chqgybefo8hyziybfdv713znap31t1rx6c517d8h7ef8rwir393obibab54lxed4fv564dss1asui0lzon9n3s54bfluhmg2gpbefjui9luuigflc1v1j099erkgi7sp194wsyc2dmim6tqn28wftuqsjn15rjf46o9mjnf2lutvnx1sidekdp0ccgj7em2y6eznxcmwl6pnfncln4s50m4bcm5uq1n9us7nt8ufo5dyy7xjtymteb1v0chmjuvq8w0tvkccgft88p418my7006w7dttlgmez0halg425xu13b4p5w3mtky53dg9gm350ggto7vcqm6fkhatoy2c64nuz02l1wjn37v2ewd83v25fpb1q7lnv6ipq233bpqw6i5ffzki0miemmecoesmnep6z1ub7ziofjhyzyrtze0qq3iaiz1klasy7opugdik2lwcses2m61x3y7ginw8amyfova2gbxjz57fkhz4bj0fvax2kwf2usmb36h1gm8alqr6y2x3ikqkgokj12kki9l7khn8de5zk3o3sh6vpvfa4rt78qfe4lhovzieutnpa7tag8ose8zntht6rs2p1yxuz31zk8g5dtp88bkrl1rfanwidsrf0mxh9sq4ru41imyyfj36xf2j1kxsa3ikqun36z12ni9m3nb32bpl0r9fobgi3kdelv55o733bewsq74yyaqv6jlgq4yntkupddo8ec6as77vg67t6e8hcsp754l21s4okvj67eprzkagwsdpmoi2wzgvrku95vu6h6f6wqrq0rkg5z43ihmx8hho3ubcqmlwewl6iy0mvirnbwrxk4ti6j45luahlky38mf4td7ookh0tihh03v2s2au0o9aq0us58l06yn5zs64gvjfmocevbikmfllywcc575sjgz1shoufugwdhused3vsp0pawr0xt9bg 00:09:07.578 20:37:02 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@42 -- # echo 
ap7tva39dxemiolft6ge82vnrqphvcqi9bype4yp4lt1bwi1c2kcpsdpfi5olu4e7tvzfk6chqgybefo8hyziybfdv713znap31t1rx6c517d8h7ef8rwir393obibab54lxed4fv564dss1asui0lzon9n3s54bfluhmg2gpbefjui9luuigflc1v1j099erkgi7sp194wsyc2dmim6tqn28wftuqsjn15rjf46o9mjnf2lutvnx1sidekdp0ccgj7em2y6eznxcmwl6pnfncln4s50m4bcm5uq1n9us7nt8ufo5dyy7xjtymteb1v0chmjuvq8w0tvkccgft88p418my7006w7dttlgmez0halg425xu13b4p5w3mtky53dg9gm350ggto7vcqm6fkhatoy2c64nuz02l1wjn37v2ewd83v25fpb1q7lnv6ipq233bpqw6i5ffzki0miemmecoesmnep6z1ub7ziofjhyzyrtze0qq3iaiz1klasy7opugdik2lwcses2m61x3y7ginw8amyfova2gbxjz57fkhz4bj0fvax2kwf2usmb36h1gm8alqr6y2x3ikqkgokj12kki9l7khn8de5zk3o3sh6vpvfa4rt78qfe4lhovzieutnpa7tag8ose8zntht6rs2p1yxuz31zk8g5dtp88bkrl1rfanwidsrf0mxh9sq4ru41imyyfj36xf2j1kxsa3ikqun36z12ni9m3nb32bpl0r9fobgi3kdelv55o733bewsq74yyaqv6jlgq4yntkupddo8ec6as77vg67t6e8hcsp754l21s4okvj67eprzkagwsdpmoi2wzgvrku95vu6h6f6wqrq0rkg5z43ihmx8hho3ubcqmlwewl6iy0mvirnbwrxk4ti6j45luahlky38mf4td7ookh0tihh03v2s2au0o9aq0us58l06yn5zs64gvjfmocevbikmfllywcc575sjgz1shoufugwdhused3vsp0pawr0xt9bg 00:09:07.578 20:37:02 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --oflag=append --bs=536869887 --count=1 00:09:07.578 [2024-11-26 20:37:02.557998] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:09:07.578 [2024-11-26 20:37:02.558394] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61392 ] 00:09:07.836 [2024-11-26 20:37:02.719741] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:07.836 [2024-11-26 20:37:02.812873] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:08.094 [2024-11-26 20:37:02.900135] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:08.660  [2024-11-26T20:37:04.219Z] Copying: 511/511 [MB] (average 1179 MBps) 00:09:09.226 00:09:09.226 20:37:04 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # gen_conf 00:09:09.226 20:37:04 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --ob=uring0 --json /dev/fd/62 00:09:09.226 20:37:04 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:09:09.226 20:37:04 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:09:09.226 [2024-11-26 20:37:04.167283] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
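Behind the xtrace above, the uring copy target is a zram block device exposed to spdk_dd as a uring bdev alongside a malloc bdev. A minimal sketch of that setup (the disksize sysfs path is an assumption based on the standard zram interface; device id 1 matches this run):

    dev_id=$(cat /sys/class/zram-control/hot_add)     # allocates /dev/zram$dev_id (1 in this run)
    echo 512M > /sys/block/zram${dev_id}/disksize      # size the backing device (path assumed)
    # magic.dump0 starts with the 1024-byte magic echoed above, then is padded out with zeroes
    build/bin/spdk_dd --if=/dev/zero --of=test/dd/magic.dump0 --oflag=append --bs=536869887 --count=1
    # push the whole dump into the uring bdev defined in the JSON dumped below (uring0 -> /dev/zram1)
    build/bin/spdk_dd --if=test/dd/magic.dump0 --ob=uring0 --json /dev/fd/62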
00:09:09.226 [2024-11-26 20:37:04.167388] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61415 ] 00:09:09.226 { 00:09:09.226 "subsystems": [ 00:09:09.226 { 00:09:09.226 "subsystem": "bdev", 00:09:09.226 "config": [ 00:09:09.226 { 00:09:09.226 "params": { 00:09:09.226 "block_size": 512, 00:09:09.226 "num_blocks": 1048576, 00:09:09.226 "name": "malloc0" 00:09:09.226 }, 00:09:09.226 "method": "bdev_malloc_create" 00:09:09.226 }, 00:09:09.226 { 00:09:09.226 "params": { 00:09:09.226 "filename": "/dev/zram1", 00:09:09.226 "name": "uring0" 00:09:09.226 }, 00:09:09.226 "method": "bdev_uring_create" 00:09:09.226 }, 00:09:09.226 { 00:09:09.226 "method": "bdev_wait_for_examine" 00:09:09.226 } 00:09:09.226 ] 00:09:09.226 } 00:09:09.226 ] 00:09:09.226 } 00:09:09.483 [2024-11-26 20:37:04.315953] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:09.483 [2024-11-26 20:37:04.400998] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:09.483 [2024-11-26 20:37:04.447621] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:10.860  [2024-11-26T20:37:06.788Z] Copying: 213/512 [MB] (213 MBps) [2024-11-26T20:37:07.355Z] Copying: 405/512 [MB] (191 MBps) [2024-11-26T20:37:07.922Z] Copying: 512/512 [MB] (average 200 MBps) 00:09:12.929 00:09:12.929 20:37:07 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # gen_conf 00:09:12.930 20:37:07 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 --json /dev/fd/62 00:09:12.930 20:37:07 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:09:12.930 20:37:07 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:09:12.930 { 00:09:12.930 "subsystems": [ 00:09:12.930 { 00:09:12.930 "subsystem": "bdev", 00:09:12.930 "config": [ 00:09:12.930 { 00:09:12.930 "params": { 00:09:12.930 "block_size": 512, 00:09:12.930 "num_blocks": 1048576, 00:09:12.930 "name": "malloc0" 00:09:12.930 }, 00:09:12.930 "method": "bdev_malloc_create" 00:09:12.930 }, 00:09:12.930 { 00:09:12.930 "params": { 00:09:12.930 "filename": "/dev/zram1", 00:09:12.930 "name": "uring0" 00:09:12.930 }, 00:09:12.930 "method": "bdev_uring_create" 00:09:12.930 }, 00:09:12.930 { 00:09:12.930 "method": "bdev_wait_for_examine" 00:09:12.930 } 00:09:12.930 ] 00:09:12.930 } 00:09:12.930 ] 00:09:12.930 } 00:09:12.930 [2024-11-26 20:37:07.845945] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:09:12.930 [2024-11-26 20:37:07.846059] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61462 ] 00:09:13.188 [2024-11-26 20:37:08.008916] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:13.189 [2024-11-26 20:37:08.101185] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:13.189 [2024-11-26 20:37:08.151731] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:14.652  [2024-11-26T20:37:10.579Z] Copying: 139/512 [MB] (139 MBps) [2024-11-26T20:37:11.514Z] Copying: 269/512 [MB] (130 MBps) [2024-11-26T20:37:12.501Z] Copying: 407/512 [MB] (137 MBps) [2024-11-26T20:37:13.067Z] Copying: 512/512 [MB] (average 135 MBps) 00:09:18.074 00:09:18.074 20:37:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@65 -- # read -rn1024 verify_magic 00:09:18.074 20:37:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@66 -- # [[ ap7tva39dxemiolft6ge82vnrqphvcqi9bype4yp4lt1bwi1c2kcpsdpfi5olu4e7tvzfk6chqgybefo8hyziybfdv713znap31t1rx6c517d8h7ef8rwir393obibab54lxed4fv564dss1asui0lzon9n3s54bfluhmg2gpbefjui9luuigflc1v1j099erkgi7sp194wsyc2dmim6tqn28wftuqsjn15rjf46o9mjnf2lutvnx1sidekdp0ccgj7em2y6eznxcmwl6pnfncln4s50m4bcm5uq1n9us7nt8ufo5dyy7xjtymteb1v0chmjuvq8w0tvkccgft88p418my7006w7dttlgmez0halg425xu13b4p5w3mtky53dg9gm350ggto7vcqm6fkhatoy2c64nuz02l1wjn37v2ewd83v25fpb1q7lnv6ipq233bpqw6i5ffzki0miemmecoesmnep6z1ub7ziofjhyzyrtze0qq3iaiz1klasy7opugdik2lwcses2m61x3y7ginw8amyfova2gbxjz57fkhz4bj0fvax2kwf2usmb36h1gm8alqr6y2x3ikqkgokj12kki9l7khn8de5zk3o3sh6vpvfa4rt78qfe4lhovzieutnpa7tag8ose8zntht6rs2p1yxuz31zk8g5dtp88bkrl1rfanwidsrf0mxh9sq4ru41imyyfj36xf2j1kxsa3ikqun36z12ni9m3nb32bpl0r9fobgi3kdelv55o733bewsq74yyaqv6jlgq4yntkupddo8ec6as77vg67t6e8hcsp754l21s4okvj67eprzkagwsdpmoi2wzgvrku95vu6h6f6wqrq0rkg5z43ihmx8hho3ubcqmlwewl6iy0mvirnbwrxk4ti6j45luahlky38mf4td7ookh0tihh03v2s2au0o9aq0us58l06yn5zs64gvjfmocevbikmfllywcc575sjgz1shoufugwdhused3vsp0pawr0xt9bg == 
\a\p\7\t\v\a\3\9\d\x\e\m\i\o\l\f\t\6\g\e\8\2\v\n\r\q\p\h\v\c\q\i\9\b\y\p\e\4\y\p\4\l\t\1\b\w\i\1\c\2\k\c\p\s\d\p\f\i\5\o\l\u\4\e\7\t\v\z\f\k\6\c\h\q\g\y\b\e\f\o\8\h\y\z\i\y\b\f\d\v\7\1\3\z\n\a\p\3\1\t\1\r\x\6\c\5\1\7\d\8\h\7\e\f\8\r\w\i\r\3\9\3\o\b\i\b\a\b\5\4\l\x\e\d\4\f\v\5\6\4\d\s\s\1\a\s\u\i\0\l\z\o\n\9\n\3\s\5\4\b\f\l\u\h\m\g\2\g\p\b\e\f\j\u\i\9\l\u\u\i\g\f\l\c\1\v\1\j\0\9\9\e\r\k\g\i\7\s\p\1\9\4\w\s\y\c\2\d\m\i\m\6\t\q\n\2\8\w\f\t\u\q\s\j\n\1\5\r\j\f\4\6\o\9\m\j\n\f\2\l\u\t\v\n\x\1\s\i\d\e\k\d\p\0\c\c\g\j\7\e\m\2\y\6\e\z\n\x\c\m\w\l\6\p\n\f\n\c\l\n\4\s\5\0\m\4\b\c\m\5\u\q\1\n\9\u\s\7\n\t\8\u\f\o\5\d\y\y\7\x\j\t\y\m\t\e\b\1\v\0\c\h\m\j\u\v\q\8\w\0\t\v\k\c\c\g\f\t\8\8\p\4\1\8\m\y\7\0\0\6\w\7\d\t\t\l\g\m\e\z\0\h\a\l\g\4\2\5\x\u\1\3\b\4\p\5\w\3\m\t\k\y\5\3\d\g\9\g\m\3\5\0\g\g\t\o\7\v\c\q\m\6\f\k\h\a\t\o\y\2\c\6\4\n\u\z\0\2\l\1\w\j\n\3\7\v\2\e\w\d\8\3\v\2\5\f\p\b\1\q\7\l\n\v\6\i\p\q\2\3\3\b\p\q\w\6\i\5\f\f\z\k\i\0\m\i\e\m\m\e\c\o\e\s\m\n\e\p\6\z\1\u\b\7\z\i\o\f\j\h\y\z\y\r\t\z\e\0\q\q\3\i\a\i\z\1\k\l\a\s\y\7\o\p\u\g\d\i\k\2\l\w\c\s\e\s\2\m\6\1\x\3\y\7\g\i\n\w\8\a\m\y\f\o\v\a\2\g\b\x\j\z\5\7\f\k\h\z\4\b\j\0\f\v\a\x\2\k\w\f\2\u\s\m\b\3\6\h\1\g\m\8\a\l\q\r\6\y\2\x\3\i\k\q\k\g\o\k\j\1\2\k\k\i\9\l\7\k\h\n\8\d\e\5\z\k\3\o\3\s\h\6\v\p\v\f\a\4\r\t\7\8\q\f\e\4\l\h\o\v\z\i\e\u\t\n\p\a\7\t\a\g\8\o\s\e\8\z\n\t\h\t\6\r\s\2\p\1\y\x\u\z\3\1\z\k\8\g\5\d\t\p\8\8\b\k\r\l\1\r\f\a\n\w\i\d\s\r\f\0\m\x\h\9\s\q\4\r\u\4\1\i\m\y\y\f\j\3\6\x\f\2\j\1\k\x\s\a\3\i\k\q\u\n\3\6\z\1\2\n\i\9\m\3\n\b\3\2\b\p\l\0\r\9\f\o\b\g\i\3\k\d\e\l\v\5\5\o\7\3\3\b\e\w\s\q\7\4\y\y\a\q\v\6\j\l\g\q\4\y\n\t\k\u\p\d\d\o\8\e\c\6\a\s\7\7\v\g\6\7\t\6\e\8\h\c\s\p\7\5\4\l\2\1\s\4\o\k\v\j\6\7\e\p\r\z\k\a\g\w\s\d\p\m\o\i\2\w\z\g\v\r\k\u\9\5\v\u\6\h\6\f\6\w\q\r\q\0\r\k\g\5\z\4\3\i\h\m\x\8\h\h\o\3\u\b\c\q\m\l\w\e\w\l\6\i\y\0\m\v\i\r\n\b\w\r\x\k\4\t\i\6\j\4\5\l\u\a\h\l\k\y\3\8\m\f\4\t\d\7\o\o\k\h\0\t\i\h\h\0\3\v\2\s\2\a\u\0\o\9\a\q\0\u\s\5\8\l\0\6\y\n\5\z\s\6\4\g\v\j\f\m\o\c\e\v\b\i\k\m\f\l\l\y\w\c\c\5\7\5\s\j\g\z\1\s\h\o\u\f\u\g\w\d\h\u\s\e\d\3\v\s\p\0\p\a\w\r\0\x\t\9\b\g ]] 00:09:18.074 20:37:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@68 -- # read -rn1024 verify_magic 00:09:18.075 20:37:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@69 -- # [[ ap7tva39dxemiolft6ge82vnrqphvcqi9bype4yp4lt1bwi1c2kcpsdpfi5olu4e7tvzfk6chqgybefo8hyziybfdv713znap31t1rx6c517d8h7ef8rwir393obibab54lxed4fv564dss1asui0lzon9n3s54bfluhmg2gpbefjui9luuigflc1v1j099erkgi7sp194wsyc2dmim6tqn28wftuqsjn15rjf46o9mjnf2lutvnx1sidekdp0ccgj7em2y6eznxcmwl6pnfncln4s50m4bcm5uq1n9us7nt8ufo5dyy7xjtymteb1v0chmjuvq8w0tvkccgft88p418my7006w7dttlgmez0halg425xu13b4p5w3mtky53dg9gm350ggto7vcqm6fkhatoy2c64nuz02l1wjn37v2ewd83v25fpb1q7lnv6ipq233bpqw6i5ffzki0miemmecoesmnep6z1ub7ziofjhyzyrtze0qq3iaiz1klasy7opugdik2lwcses2m61x3y7ginw8amyfova2gbxjz57fkhz4bj0fvax2kwf2usmb36h1gm8alqr6y2x3ikqkgokj12kki9l7khn8de5zk3o3sh6vpvfa4rt78qfe4lhovzieutnpa7tag8ose8zntht6rs2p1yxuz31zk8g5dtp88bkrl1rfanwidsrf0mxh9sq4ru41imyyfj36xf2j1kxsa3ikqun36z12ni9m3nb32bpl0r9fobgi3kdelv55o733bewsq74yyaqv6jlgq4yntkupddo8ec6as77vg67t6e8hcsp754l21s4okvj67eprzkagwsdpmoi2wzgvrku95vu6h6f6wqrq0rkg5z43ihmx8hho3ubcqmlwewl6iy0mvirnbwrxk4ti6j45luahlky38mf4td7ookh0tihh03v2s2au0o9aq0us58l06yn5zs64gvjfmocevbikmfllywcc575sjgz1shoufugwdhused3vsp0pawr0xt9bg == 
\a\p\7\t\v\a\3\9\d\x\e\m\i\o\l\f\t\6\g\e\8\2\v\n\r\q\p\h\v\c\q\i\9\b\y\p\e\4\y\p\4\l\t\1\b\w\i\1\c\2\k\c\p\s\d\p\f\i\5\o\l\u\4\e\7\t\v\z\f\k\6\c\h\q\g\y\b\e\f\o\8\h\y\z\i\y\b\f\d\v\7\1\3\z\n\a\p\3\1\t\1\r\x\6\c\5\1\7\d\8\h\7\e\f\8\r\w\i\r\3\9\3\o\b\i\b\a\b\5\4\l\x\e\d\4\f\v\5\6\4\d\s\s\1\a\s\u\i\0\l\z\o\n\9\n\3\s\5\4\b\f\l\u\h\m\g\2\g\p\b\e\f\j\u\i\9\l\u\u\i\g\f\l\c\1\v\1\j\0\9\9\e\r\k\g\i\7\s\p\1\9\4\w\s\y\c\2\d\m\i\m\6\t\q\n\2\8\w\f\t\u\q\s\j\n\1\5\r\j\f\4\6\o\9\m\j\n\f\2\l\u\t\v\n\x\1\s\i\d\e\k\d\p\0\c\c\g\j\7\e\m\2\y\6\e\z\n\x\c\m\w\l\6\p\n\f\n\c\l\n\4\s\5\0\m\4\b\c\m\5\u\q\1\n\9\u\s\7\n\t\8\u\f\o\5\d\y\y\7\x\j\t\y\m\t\e\b\1\v\0\c\h\m\j\u\v\q\8\w\0\t\v\k\c\c\g\f\t\8\8\p\4\1\8\m\y\7\0\0\6\w\7\d\t\t\l\g\m\e\z\0\h\a\l\g\4\2\5\x\u\1\3\b\4\p\5\w\3\m\t\k\y\5\3\d\g\9\g\m\3\5\0\g\g\t\o\7\v\c\q\m\6\f\k\h\a\t\o\y\2\c\6\4\n\u\z\0\2\l\1\w\j\n\3\7\v\2\e\w\d\8\3\v\2\5\f\p\b\1\q\7\l\n\v\6\i\p\q\2\3\3\b\p\q\w\6\i\5\f\f\z\k\i\0\m\i\e\m\m\e\c\o\e\s\m\n\e\p\6\z\1\u\b\7\z\i\o\f\j\h\y\z\y\r\t\z\e\0\q\q\3\i\a\i\z\1\k\l\a\s\y\7\o\p\u\g\d\i\k\2\l\w\c\s\e\s\2\m\6\1\x\3\y\7\g\i\n\w\8\a\m\y\f\o\v\a\2\g\b\x\j\z\5\7\f\k\h\z\4\b\j\0\f\v\a\x\2\k\w\f\2\u\s\m\b\3\6\h\1\g\m\8\a\l\q\r\6\y\2\x\3\i\k\q\k\g\o\k\j\1\2\k\k\i\9\l\7\k\h\n\8\d\e\5\z\k\3\o\3\s\h\6\v\p\v\f\a\4\r\t\7\8\q\f\e\4\l\h\o\v\z\i\e\u\t\n\p\a\7\t\a\g\8\o\s\e\8\z\n\t\h\t\6\r\s\2\p\1\y\x\u\z\3\1\z\k\8\g\5\d\t\p\8\8\b\k\r\l\1\r\f\a\n\w\i\d\s\r\f\0\m\x\h\9\s\q\4\r\u\4\1\i\m\y\y\f\j\3\6\x\f\2\j\1\k\x\s\a\3\i\k\q\u\n\3\6\z\1\2\n\i\9\m\3\n\b\3\2\b\p\l\0\r\9\f\o\b\g\i\3\k\d\e\l\v\5\5\o\7\3\3\b\e\w\s\q\7\4\y\y\a\q\v\6\j\l\g\q\4\y\n\t\k\u\p\d\d\o\8\e\c\6\a\s\7\7\v\g\6\7\t\6\e\8\h\c\s\p\7\5\4\l\2\1\s\4\o\k\v\j\6\7\e\p\r\z\k\a\g\w\s\d\p\m\o\i\2\w\z\g\v\r\k\u\9\5\v\u\6\h\6\f\6\w\q\r\q\0\r\k\g\5\z\4\3\i\h\m\x\8\h\h\o\3\u\b\c\q\m\l\w\e\w\l\6\i\y\0\m\v\i\r\n\b\w\r\x\k\4\t\i\6\j\4\5\l\u\a\h\l\k\y\3\8\m\f\4\t\d\7\o\o\k\h\0\t\i\h\h\0\3\v\2\s\2\a\u\0\o\9\a\q\0\u\s\5\8\l\0\6\y\n\5\z\s\6\4\g\v\j\f\m\o\c\e\v\b\i\k\m\f\l\l\y\w\c\c\5\7\5\s\j\g\z\1\s\h\o\u\f\u\g\w\d\h\u\s\e\d\3\v\s\p\0\p\a\w\r\0\x\t\9\b\g ]] 00:09:18.075 20:37:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@71 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:09:18.640 20:37:13 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --ob=malloc0 --json /dev/fd/62 00:09:18.640 20:37:13 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # gen_conf 00:09:18.640 20:37:13 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:09:18.640 20:37:13 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:09:18.640 { 00:09:18.640 "subsystems": [ 00:09:18.640 { 00:09:18.640 "subsystem": "bdev", 00:09:18.640 "config": [ 00:09:18.640 { 00:09:18.640 "params": { 00:09:18.641 "block_size": 512, 00:09:18.641 "num_blocks": 1048576, 00:09:18.641 "name": "malloc0" 00:09:18.641 }, 00:09:18.641 "method": "bdev_malloc_create" 00:09:18.641 }, 00:09:18.641 { 00:09:18.641 "params": { 00:09:18.641 "filename": "/dev/zram1", 00:09:18.641 "name": "uring0" 00:09:18.641 }, 00:09:18.641 "method": "bdev_uring_create" 00:09:18.641 }, 00:09:18.641 { 00:09:18.641 "method": "bdev_wait_for_examine" 00:09:18.641 } 00:09:18.641 ] 00:09:18.641 } 00:09:18.641 ] 00:09:18.641 } 00:09:18.641 [2024-11-26 20:37:13.497361] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
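The verification just traced boils down to three checks on the data that went through uring0; restated compactly (the input redirections for read are not shown by xtrace and are assumed):

    read -rn1024 verify_magic < test/dd/magic.dump1    # first 1 KiB read back from the uring bdev
    [[ $verify_magic == "$magic" ]]                     # must match the 1 KiB written before the copy
    diff -q test/dd/magic.dump0 test/dd/magic.dump1     # the two full dumps must be identical
    # finally the data is copied once more, uring0 -> malloc0, through the same JSON config
    build/bin/spdk_dd --ib=uring0 --ob=malloc0 --json /dev/fd/62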
00:09:18.641 [2024-11-26 20:37:13.497477] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61557 ] 00:09:18.898 [2024-11-26 20:37:13.652417] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:18.898 [2024-11-26 20:37:13.749228] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:18.898 [2024-11-26 20:37:13.802640] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:20.274  [2024-11-26T20:37:16.199Z] Copying: 150/512 [MB] (150 MBps) [2024-11-26T20:37:17.134Z] Copying: 302/512 [MB] (151 MBps) [2024-11-26T20:37:17.701Z] Copying: 451/512 [MB] (149 MBps) [2024-11-26T20:37:17.960Z] Copying: 512/512 [MB] (average 151 MBps) 00:09:22.967 00:09:22.967 20:37:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # method_bdev_uring_delete_0=(['name']='uring0') 00:09:22.967 20:37:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # local -A method_bdev_uring_delete_0 00:09:22.967 20:37:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:09:22.967 20:37:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:09:22.967 20:37:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # gen_conf 00:09:22.967 20:37:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --of=/dev/fd/61 --json /dev/fd/59 00:09:22.967 20:37:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:09:22.967 20:37:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:09:23.224 { 00:09:23.224 "subsystems": [ 00:09:23.224 { 00:09:23.224 "subsystem": "bdev", 00:09:23.224 "config": [ 00:09:23.224 { 00:09:23.224 "params": { 00:09:23.224 "block_size": 512, 00:09:23.224 "num_blocks": 1048576, 00:09:23.224 "name": "malloc0" 00:09:23.224 }, 00:09:23.224 "method": "bdev_malloc_create" 00:09:23.224 }, 00:09:23.224 { 00:09:23.224 "params": { 00:09:23.224 "filename": "/dev/zram1", 00:09:23.225 "name": "uring0" 00:09:23.225 }, 00:09:23.225 "method": "bdev_uring_create" 00:09:23.225 }, 00:09:23.225 { 00:09:23.225 "params": { 00:09:23.225 "name": "uring0" 00:09:23.225 }, 00:09:23.225 "method": "bdev_uring_delete" 00:09:23.225 }, 00:09:23.225 { 00:09:23.225 "method": "bdev_wait_for_examine" 00:09:23.225 } 00:09:23.225 ] 00:09:23.225 } 00:09:23.225 ] 00:09:23.225 } 00:09:23.225 [2024-11-26 20:37:18.016267] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:09:23.225 [2024-11-26 20:37:18.016381] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61613 ] 00:09:23.225 [2024-11-26 20:37:18.168437] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:23.482 [2024-11-26 20:37:18.248357] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:23.482 [2024-11-26 20:37:18.292087] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:23.780  [2024-11-26T20:37:19.055Z] Copying: 0/0 [B] (average 0 Bps) 00:09:24.062 00:09:24.062 20:37:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # : 00:09:24.062 20:37:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:09:24.062 20:37:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # gen_conf 00:09:24.062 20:37:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@652 -- # local es=0 00:09:24.062 20:37:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:09:24.062 20:37:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:09:24.062 20:37:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:09:24.062 20:37:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:24.062 20:37:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:24.062 20:37:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:24.062 20:37:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:24.062 20:37:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:24.062 20:37:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:24.062 20:37:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:24.062 20:37:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:24.062 20:37:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:09:24.320 [2024-11-26 20:37:19.100516] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
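The teardown deletes the uring bdev through the same JSON channel used to create it, and the lines that follow confirm that a subsequent read of uring0 fails with "No such device" as intended. A sketch of that negative test (the fd plumbing and output target are placeholders, not the harness's exact descriptors):

    # config ends with {"method": "bdev_uring_delete", "params": {"name": "uring0"}}
    build/bin/spdk_dd --if=/dev/fd/62 --of=/dev/fd/61 --json /dev/fd/59
    # any further access to uring0 must now fail
    if build/bin/spdk_dd --ib=uring0 --of=/dev/null --json /dev/fd/61; then
        echo "unexpected success: uring0 should be gone" >&2
        exit 1
    fi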
00:09:24.320 [2024-11-26 20:37:19.100885] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61642 ] 00:09:24.320 { 00:09:24.320 "subsystems": [ 00:09:24.320 { 00:09:24.320 "subsystem": "bdev", 00:09:24.320 "config": [ 00:09:24.320 { 00:09:24.320 "params": { 00:09:24.320 "block_size": 512, 00:09:24.320 "num_blocks": 1048576, 00:09:24.320 "name": "malloc0" 00:09:24.320 }, 00:09:24.320 "method": "bdev_malloc_create" 00:09:24.320 }, 00:09:24.320 { 00:09:24.320 "params": { 00:09:24.320 "filename": "/dev/zram1", 00:09:24.320 "name": "uring0" 00:09:24.320 }, 00:09:24.320 "method": "bdev_uring_create" 00:09:24.320 }, 00:09:24.320 { 00:09:24.320 "params": { 00:09:24.320 "name": "uring0" 00:09:24.320 }, 00:09:24.320 "method": "bdev_uring_delete" 00:09:24.320 }, 00:09:24.320 { 00:09:24.320 "method": "bdev_wait_for_examine" 00:09:24.320 } 00:09:24.320 ] 00:09:24.320 } 00:09:24.320 ] 00:09:24.320 } 00:09:24.320 [2024-11-26 20:37:19.253920] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:24.578 [2024-11-26 20:37:19.333949] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:24.578 [2024-11-26 20:37:19.378524] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:24.837 [2024-11-26 20:37:19.644265] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: uring0 00:09:24.837 [2024-11-26 20:37:19.644366] spdk_dd.c: 933:dd_open_bdev: *ERROR*: Could not open bdev uring0: No such device 00:09:24.837 [2024-11-26 20:37:19.644388] spdk_dd.c:1090:dd_run: *ERROR*: uring0: No such device 00:09:24.837 [2024-11-26 20:37:19.644411] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:25.094 [2024-11-26 20:37:20.053503] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:09:25.352 20:37:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@655 -- # es=237 00:09:25.352 20:37:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:25.352 20:37:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@664 -- # es=109 00:09:25.352 20:37:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@665 -- # case "$es" in 00:09:25.352 20:37:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@672 -- # es=1 00:09:25.352 20:37:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:25.352 20:37:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@99 -- # remove_zram_dev 1 00:09:25.352 20:37:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@168 -- # local id=1 00:09:25.352 20:37:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@170 -- # [[ -e /sys/block/zram1 ]] 00:09:25.352 20:37:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@172 -- # echo 1 00:09:25.352 20:37:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@173 -- # echo 1 00:09:25.352 20:37:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@100 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:09:25.918 00:09:25.918 real 0m18.145s 00:09:25.918 user 0m11.903s 00:09:25.918 sys 0m16.270s 00:09:25.918 20:37:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:25.918 20:37:20 
spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:09:25.918 ************************************ 00:09:25.918 END TEST dd_uring_copy 00:09:25.918 ************************************ 00:09:25.918 ************************************ 00:09:25.918 END TEST spdk_dd_uring 00:09:25.918 ************************************ 00:09:25.918 00:09:25.918 real 0m18.422s 00:09:25.918 user 0m12.062s 00:09:25.918 sys 0m16.397s 00:09:25.918 20:37:20 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:25.918 20:37:20 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:09:25.918 20:37:20 spdk_dd -- dd/dd.sh@27 -- # run_test spdk_dd_sparse /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:09:25.918 20:37:20 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:25.918 20:37:20 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:25.918 20:37:20 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:09:25.918 ************************************ 00:09:25.918 START TEST spdk_dd_sparse 00:09:25.918 ************************************ 00:09:25.918 20:37:20 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:09:25.918 * Looking for test storage... 00:09:25.918 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:09:25.918 20:37:20 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:25.918 20:37:20 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1693 -- # lcov --version 00:09:25.918 20:37:20 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:25.918 20:37:20 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:25.918 20:37:20 spdk_dd.spdk_dd_sparse -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:25.918 20:37:20 spdk_dd.spdk_dd_sparse -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:25.918 20:37:20 spdk_dd.spdk_dd_sparse -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:25.918 20:37:20 spdk_dd.spdk_dd_sparse -- scripts/common.sh@336 -- # IFS=.-: 00:09:25.918 20:37:20 spdk_dd.spdk_dd_sparse -- scripts/common.sh@336 -- # read -ra ver1 00:09:25.918 20:37:20 spdk_dd.spdk_dd_sparse -- scripts/common.sh@337 -- # IFS=.-: 00:09:25.918 20:37:20 spdk_dd.spdk_dd_sparse -- scripts/common.sh@337 -- # read -ra ver2 00:09:25.918 20:37:20 spdk_dd.spdk_dd_sparse -- scripts/common.sh@338 -- # local 'op=<' 00:09:25.918 20:37:20 spdk_dd.spdk_dd_sparse -- scripts/common.sh@340 -- # ver1_l=2 00:09:25.919 20:37:20 spdk_dd.spdk_dd_sparse -- scripts/common.sh@341 -- # ver2_l=1 00:09:25.919 20:37:20 spdk_dd.spdk_dd_sparse -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:25.919 20:37:20 spdk_dd.spdk_dd_sparse -- scripts/common.sh@344 -- # case "$op" in 00:09:25.919 20:37:20 spdk_dd.spdk_dd_sparse -- scripts/common.sh@345 -- # : 1 00:09:25.919 20:37:20 spdk_dd.spdk_dd_sparse -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:25.919 20:37:20 spdk_dd.spdk_dd_sparse -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:25.919 20:37:20 spdk_dd.spdk_dd_sparse -- scripts/common.sh@365 -- # decimal 1 00:09:25.919 20:37:20 spdk_dd.spdk_dd_sparse -- scripts/common.sh@353 -- # local d=1 00:09:25.919 20:37:20 spdk_dd.spdk_dd_sparse -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:25.919 20:37:20 spdk_dd.spdk_dd_sparse -- scripts/common.sh@355 -- # echo 1 00:09:25.919 20:37:20 spdk_dd.spdk_dd_sparse -- scripts/common.sh@365 -- # ver1[v]=1 00:09:25.919 20:37:20 spdk_dd.spdk_dd_sparse -- scripts/common.sh@366 -- # decimal 2 00:09:25.919 20:37:20 spdk_dd.spdk_dd_sparse -- scripts/common.sh@353 -- # local d=2 00:09:25.919 20:37:20 spdk_dd.spdk_dd_sparse -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:25.919 20:37:20 spdk_dd.spdk_dd_sparse -- scripts/common.sh@355 -- # echo 2 00:09:25.919 20:37:20 spdk_dd.spdk_dd_sparse -- scripts/common.sh@366 -- # ver2[v]=2 00:09:25.919 20:37:20 spdk_dd.spdk_dd_sparse -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:25.919 20:37:20 spdk_dd.spdk_dd_sparse -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:25.919 20:37:20 spdk_dd.spdk_dd_sparse -- scripts/common.sh@368 -- # return 0 00:09:25.919 20:37:20 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:25.919 20:37:20 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:25.919 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:25.919 --rc genhtml_branch_coverage=1 00:09:25.919 --rc genhtml_function_coverage=1 00:09:25.919 --rc genhtml_legend=1 00:09:25.919 --rc geninfo_all_blocks=1 00:09:25.919 --rc geninfo_unexecuted_blocks=1 00:09:25.919 00:09:25.919 ' 00:09:25.919 20:37:20 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:25.919 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:25.919 --rc genhtml_branch_coverage=1 00:09:25.919 --rc genhtml_function_coverage=1 00:09:25.919 --rc genhtml_legend=1 00:09:25.919 --rc geninfo_all_blocks=1 00:09:25.919 --rc geninfo_unexecuted_blocks=1 00:09:25.919 00:09:25.919 ' 00:09:25.919 20:37:20 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:25.919 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:25.919 --rc genhtml_branch_coverage=1 00:09:25.919 --rc genhtml_function_coverage=1 00:09:25.919 --rc genhtml_legend=1 00:09:25.919 --rc geninfo_all_blocks=1 00:09:25.919 --rc geninfo_unexecuted_blocks=1 00:09:25.919 00:09:25.919 ' 00:09:25.919 20:37:20 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:25.919 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:25.919 --rc genhtml_branch_coverage=1 00:09:25.919 --rc genhtml_function_coverage=1 00:09:25.919 --rc genhtml_legend=1 00:09:25.919 --rc geninfo_all_blocks=1 00:09:25.919 --rc geninfo_unexecuted_blocks=1 00:09:25.919 00:09:25.919 ' 00:09:25.919 20:37:20 spdk_dd.spdk_dd_sparse -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:25.919 20:37:20 spdk_dd.spdk_dd_sparse -- scripts/common.sh@15 -- # shopt -s extglob 00:09:25.919 20:37:20 spdk_dd.spdk_dd_sparse -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:25.919 20:37:20 spdk_dd.spdk_dd_sparse -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:25.919 20:37:20 spdk_dd.spdk_dd_sparse -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:25.919 20:37:20 
spdk_dd.spdk_dd_sparse -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:25.919 20:37:20 spdk_dd.spdk_dd_sparse -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:25.919 20:37:20 spdk_dd.spdk_dd_sparse -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:25.919 20:37:20 spdk_dd.spdk_dd_sparse -- paths/export.sh@5 -- # export PATH 00:09:25.919 20:37:20 spdk_dd.spdk_dd_sparse -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:25.919 20:37:20 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 00:09:25.919 20:37:20 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@109 -- # aio_bdev=dd_aio 00:09:25.919 20:37:20 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@110 -- # file1=file_zero1 00:09:25.919 20:37:20 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@111 -- # file2=file_zero2 00:09:25.919 20:37:20 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@112 -- # file3=file_zero3 00:09:25.919 20:37:20 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@113 -- # lvstore=dd_lvstore 00:09:25.919 20:37:20 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@114 -- # lvol=dd_lvol 00:09:25.919 20:37:20 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@116 -- # trap cleanup EXIT 00:09:25.919 20:37:20 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@118 -- # prepare 00:09:25.919 20:37:20 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600 00:09:25.919 20:37:20 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 00:09:26.178 1+0 records in 00:09:26.178 1+0 records out 00:09:26.178 4194304 bytes (4.2 MB, 
4.0 MiB) copied, 0.0106249 s, 395 MB/s 00:09:26.178 20:37:20 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4 00:09:26.178 1+0 records in 00:09:26.178 1+0 records out 00:09:26.178 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.0110854 s, 378 MB/s 00:09:26.178 20:37:20 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 00:09:26.178 1+0 records in 00:09:26.178 1+0 records out 00:09:26.178 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00611725 s, 686 MB/s 00:09:26.178 20:37:20 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 00:09:26.178 20:37:20 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:26.178 20:37:20 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:26.178 20:37:20 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:09:26.178 ************************************ 00:09:26.178 START TEST dd_sparse_file_to_file 00:09:26.178 ************************************ 00:09:26.178 20:37:20 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1129 -- # file_to_file 00:09:26.178 20:37:20 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@26 -- # local stat1_s stat1_b 00:09:26.178 20:37:20 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@27 -- # local stat2_s stat2_b 00:09:26.178 20:37:20 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:09:26.178 20:37:20 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0 00:09:26.178 20:37:20 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(['bdev_name']='dd_aio' ['lvs_name']='dd_lvstore') 00:09:26.178 20:37:20 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 00:09:26.178 20:37:20 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 00:09:26.178 20:37:20 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # gen_conf 00:09:26.178 20:37:20 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/common.sh@31 -- # xtrace_disable 00:09:26.178 20:37:20 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:09:26.178 [2024-11-26 20:37:21.004453] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
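The prepare step traced above builds file_zero1 as a sparse file: three 4 MiB data extents at offsets 0, 16 and 32 MiB, so the apparent size is 36 MiB (37748736 bytes) while only 12 MiB (24576 512-byte blocks) is actually allocated, which is exactly what the stat checks below assert. Condensed:

    truncate dd_sparse_aio_disk --size 104857600        # 100 MiB backing file for the aio bdev
    dd if=/dev/zero of=file_zero1 bs=4M count=1          # data extent at offset 0
    dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4   # data extent at 16 MiB
    dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8   # data extent at 32 MiB, apparent size now 36 MiB
    # sparse-aware file-to-file copy; 12582912 bytes = 12 MiB block size, --sparse preserves the holes
    build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62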
00:09:26.178 [2024-11-26 20:37:21.004745] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61747 ] 00:09:26.178 { 00:09:26.178 "subsystems": [ 00:09:26.178 { 00:09:26.178 "subsystem": "bdev", 00:09:26.178 "config": [ 00:09:26.178 { 00:09:26.178 "params": { 00:09:26.178 "block_size": 4096, 00:09:26.178 "filename": "dd_sparse_aio_disk", 00:09:26.178 "name": "dd_aio" 00:09:26.178 }, 00:09:26.178 "method": "bdev_aio_create" 00:09:26.178 }, 00:09:26.178 { 00:09:26.178 "params": { 00:09:26.178 "lvs_name": "dd_lvstore", 00:09:26.178 "bdev_name": "dd_aio" 00:09:26.178 }, 00:09:26.178 "method": "bdev_lvol_create_lvstore" 00:09:26.178 }, 00:09:26.178 { 00:09:26.178 "method": "bdev_wait_for_examine" 00:09:26.178 } 00:09:26.178 ] 00:09:26.178 } 00:09:26.178 ] 00:09:26.178 } 00:09:26.178 [2024-11-26 20:37:21.160096] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:26.437 [2024-11-26 20:37:21.250915] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:26.437 [2024-11-26 20:37:21.301273] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:26.437  [2024-11-26T20:37:21.688Z] Copying: 12/36 [MB] (average 500 MBps) 00:09:26.695 00:09:26.695 20:37:21 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 00:09:26.695 20:37:21 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat1_s=37748736 00:09:26.695 20:37:21 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 00:09:26.695 20:37:21 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat2_s=37748736 00:09:26.695 20:37:21 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:09:26.695 20:37:21 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat --printf=%b file_zero1 00:09:26.695 20:37:21 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat1_b=24576 00:09:26.695 20:37:21 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 00:09:26.695 20:37:21 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat2_b=24576 00:09:26.695 ************************************ 00:09:26.695 END TEST dd_sparse_file_to_file 00:09:26.695 ************************************ 00:09:26.695 20:37:21 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 00:09:26.695 00:09:26.695 real 0m0.721s 00:09:26.695 user 0m0.459s 00:09:26.695 sys 0m0.362s 00:09:26.695 20:37:21 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:26.696 20:37:21 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:09:26.954 20:37:21 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@121 -- # run_test dd_sparse_file_to_bdev file_to_bdev 00:09:26.954 20:37:21 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:26.954 20:37:21 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:26.954 20:37:21 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:09:26.954 ************************************ 00:09:26.954 START TEST dd_sparse_file_to_bdev 
00:09:26.954 ************************************ 00:09:26.954 20:37:21 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1129 -- # file_to_bdev 00:09:26.954 20:37:21 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:09:26.954 20:37:21 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0 00:09:26.954 20:37:21 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(['lvs_name']='dd_lvstore' ['lvol_name']='dd_lvol' ['size_in_mib']='36' ['thin_provision']='true') 00:09:26.954 20:37:21 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1 00:09:26.954 20:37:21 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62 00:09:26.954 20:37:21 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # gen_conf 00:09:26.954 20:37:21 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:09:26.954 20:37:21 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:09:26.954 [2024-11-26 20:37:21.793149] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:09:26.954 [2024-11-26 20:37:21.793414] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61794 ] 00:09:26.954 { 00:09:26.955 "subsystems": [ 00:09:26.955 { 00:09:26.955 "subsystem": "bdev", 00:09:26.955 "config": [ 00:09:26.955 { 00:09:26.955 "params": { 00:09:26.955 "block_size": 4096, 00:09:26.955 "filename": "dd_sparse_aio_disk", 00:09:26.955 "name": "dd_aio" 00:09:26.955 }, 00:09:26.955 "method": "bdev_aio_create" 00:09:26.955 }, 00:09:26.955 { 00:09:26.955 "params": { 00:09:26.955 "lvs_name": "dd_lvstore", 00:09:26.955 "lvol_name": "dd_lvol", 00:09:26.955 "size_in_mib": 36, 00:09:26.955 "thin_provision": true 00:09:26.955 }, 00:09:26.955 "method": "bdev_lvol_create" 00:09:26.955 }, 00:09:26.955 { 00:09:26.955 "method": "bdev_wait_for_examine" 00:09:26.955 } 00:09:26.955 ] 00:09:26.955 } 00:09:26.955 ] 00:09:26.955 } 00:09:26.955 [2024-11-26 20:37:21.944177] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:27.278 [2024-11-26 20:37:22.034660] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:27.278 [2024-11-26 20:37:22.090471] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:27.278  [2024-11-26T20:37:22.531Z] Copying: 12/36 [MB] (average 176 MBps) 00:09:27.538 00:09:27.538 00:09:27.538 real 0m0.710s 00:09:27.538 user 0m0.453s 00:09:27.538 sys 0m0.397s 00:09:27.538 ************************************ 00:09:27.538 END TEST dd_sparse_file_to_bdev 00:09:27.538 20:37:22 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:27.538 20:37:22 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:09:27.538 ************************************ 00:09:27.538 20:37:22 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file 
bdev_to_file 00:09:27.538 20:37:22 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:27.538 20:37:22 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:27.538 20:37:22 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:09:27.538 ************************************ 00:09:27.538 START TEST dd_sparse_bdev_to_file 00:09:27.538 ************************************ 00:09:27.538 20:37:22 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1129 -- # bdev_to_file 00:09:27.538 20:37:22 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@81 -- # local stat2_s stat2_b 00:09:27.538 20:37:22 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@82 -- # local stat3_s stat3_b 00:09:27.538 20:37:22 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:09:27.538 20:37:22 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 00:09:27.538 20:37:22 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 00:09:27.538 20:37:22 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # gen_conf 00:09:27.538 20:37:22 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/common.sh@31 -- # xtrace_disable 00:09:27.538 20:37:22 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:09:27.795 [2024-11-26 20:37:22.570558] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:09:27.795 [2024-11-26 20:37:22.570669] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61828 ] 00:09:27.795 { 00:09:27.795 "subsystems": [ 00:09:27.795 { 00:09:27.795 "subsystem": "bdev", 00:09:27.795 "config": [ 00:09:27.795 { 00:09:27.795 "params": { 00:09:27.795 "block_size": 4096, 00:09:27.795 "filename": "dd_sparse_aio_disk", 00:09:27.795 "name": "dd_aio" 00:09:27.795 }, 00:09:27.795 "method": "bdev_aio_create" 00:09:27.795 }, 00:09:27.795 { 00:09:27.795 "method": "bdev_wait_for_examine" 00:09:27.795 } 00:09:27.795 ] 00:09:27.795 } 00:09:27.795 ] 00:09:27.795 } 00:09:27.795 [2024-11-26 20:37:22.732781] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:28.054 [2024-11-26 20:37:22.822727] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:28.054 [2024-11-26 20:37:22.872119] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:28.054  [2024-11-26T20:37:23.306Z] Copying: 12/36 [MB] (average 750 MBps) 00:09:28.313 00:09:28.313 20:37:23 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 00:09:28.313 20:37:23 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat2_s=37748736 00:09:28.313 20:37:23 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 00:09:28.313 20:37:23 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat3_s=37748736 00:09:28.313 20:37:23 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@100 -- # [[ 
37748736 == \3\7\7\4\8\7\3\6 ]] 00:09:28.313 20:37:23 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 00:09:28.313 20:37:23 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat2_b=24576 00:09:28.313 20:37:23 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3 00:09:28.313 20:37:23 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat3_b=24576 00:09:28.313 20:37:23 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]] 00:09:28.313 00:09:28.313 real 0m0.709s 00:09:28.313 user 0m0.442s 00:09:28.313 sys 0m0.371s 00:09:28.313 ************************************ 00:09:28.313 END TEST dd_sparse_bdev_to_file 00:09:28.313 ************************************ 00:09:28.313 20:37:23 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:28.313 20:37:23 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:09:28.313 20:37:23 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@1 -- # cleanup 00:09:28.313 20:37:23 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 00:09:28.313 20:37:23 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@12 -- # rm file_zero1 00:09:28.313 20:37:23 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@13 -- # rm file_zero2 00:09:28.313 20:37:23 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@14 -- # rm file_zero3 00:09:28.573 ************************************ 00:09:28.573 END TEST spdk_dd_sparse 00:09:28.573 ************************************ 00:09:28.573 00:09:28.573 real 0m2.629s 00:09:28.573 user 0m1.550s 00:09:28.573 sys 0m1.416s 00:09:28.573 20:37:23 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:28.573 20:37:23 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:09:28.573 20:37:23 spdk_dd -- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:09:28.573 20:37:23 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:28.573 20:37:23 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:28.573 20:37:23 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:09:28.573 ************************************ 00:09:28.573 START TEST spdk_dd_negative 00:09:28.573 ************************************ 00:09:28.573 20:37:23 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:09:28.573 * Looking for test storage... 
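All three sparse copy tests above pass on the same condition: the apparent size (stat %s) and the allocated 512-byte block count (stat %b) of source and destination must match, i.e. spdk_dd --sparse recreated the holes instead of writing zeroes. A stand-alone sketch of that check, using the values shown in the trace:

  # 37748736 bytes apparent (36 MiB); 24576 allocated blocks * 512 B = 12 MiB of real data
  [[ $(stat --printf=%s file_zero1) == $(stat --printf=%s file_zero2) ]]
  [[ $(stat --printf=%b file_zero1) == $(stat --printf=%b file_zero2) ]]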
00:09:28.573 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:09:28.573 20:37:23 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:28.573 20:37:23 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1693 -- # lcov --version 00:09:28.573 20:37:23 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:28.573 20:37:23 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:28.573 20:37:23 spdk_dd.spdk_dd_negative -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:28.573 20:37:23 spdk_dd.spdk_dd_negative -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:28.573 20:37:23 spdk_dd.spdk_dd_negative -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:28.573 20:37:23 spdk_dd.spdk_dd_negative -- scripts/common.sh@336 -- # IFS=.-: 00:09:28.573 20:37:23 spdk_dd.spdk_dd_negative -- scripts/common.sh@336 -- # read -ra ver1 00:09:28.573 20:37:23 spdk_dd.spdk_dd_negative -- scripts/common.sh@337 -- # IFS=.-: 00:09:28.573 20:37:23 spdk_dd.spdk_dd_negative -- scripts/common.sh@337 -- # read -ra ver2 00:09:28.573 20:37:23 spdk_dd.spdk_dd_negative -- scripts/common.sh@338 -- # local 'op=<' 00:09:28.573 20:37:23 spdk_dd.spdk_dd_negative -- scripts/common.sh@340 -- # ver1_l=2 00:09:28.573 20:37:23 spdk_dd.spdk_dd_negative -- scripts/common.sh@341 -- # ver2_l=1 00:09:28.573 20:37:23 spdk_dd.spdk_dd_negative -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:28.573 20:37:23 spdk_dd.spdk_dd_negative -- scripts/common.sh@344 -- # case "$op" in 00:09:28.573 20:37:23 spdk_dd.spdk_dd_negative -- scripts/common.sh@345 -- # : 1 00:09:28.573 20:37:23 spdk_dd.spdk_dd_negative -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:28.573 20:37:23 spdk_dd.spdk_dd_negative -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:28.573 20:37:23 spdk_dd.spdk_dd_negative -- scripts/common.sh@365 -- # decimal 1 00:09:28.573 20:37:23 spdk_dd.spdk_dd_negative -- scripts/common.sh@353 -- # local d=1 00:09:28.573 20:37:23 spdk_dd.spdk_dd_negative -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:28.573 20:37:23 spdk_dd.spdk_dd_negative -- scripts/common.sh@355 -- # echo 1 00:09:28.573 20:37:23 spdk_dd.spdk_dd_negative -- scripts/common.sh@365 -- # ver1[v]=1 00:09:28.573 20:37:23 spdk_dd.spdk_dd_negative -- scripts/common.sh@366 -- # decimal 2 00:09:28.573 20:37:23 spdk_dd.spdk_dd_negative -- scripts/common.sh@353 -- # local d=2 00:09:28.573 20:37:23 spdk_dd.spdk_dd_negative -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:28.833 20:37:23 spdk_dd.spdk_dd_negative -- scripts/common.sh@355 -- # echo 2 00:09:28.834 20:37:23 spdk_dd.spdk_dd_negative -- scripts/common.sh@366 -- # ver2[v]=2 00:09:28.834 20:37:23 spdk_dd.spdk_dd_negative -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:28.834 20:37:23 spdk_dd.spdk_dd_negative -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:28.834 20:37:23 spdk_dd.spdk_dd_negative -- scripts/common.sh@368 -- # return 0 00:09:28.834 20:37:23 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:28.834 20:37:23 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:28.834 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:28.834 --rc genhtml_branch_coverage=1 00:09:28.834 --rc genhtml_function_coverage=1 00:09:28.834 --rc genhtml_legend=1 00:09:28.834 --rc geninfo_all_blocks=1 00:09:28.834 --rc geninfo_unexecuted_blocks=1 00:09:28.834 00:09:28.834 ' 00:09:28.834 20:37:23 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:28.834 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:28.834 --rc genhtml_branch_coverage=1 00:09:28.834 --rc genhtml_function_coverage=1 00:09:28.834 --rc genhtml_legend=1 00:09:28.834 --rc geninfo_all_blocks=1 00:09:28.834 --rc geninfo_unexecuted_blocks=1 00:09:28.834 00:09:28.834 ' 00:09:28.834 20:37:23 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:28.834 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:28.834 --rc genhtml_branch_coverage=1 00:09:28.834 --rc genhtml_function_coverage=1 00:09:28.834 --rc genhtml_legend=1 00:09:28.834 --rc geninfo_all_blocks=1 00:09:28.834 --rc geninfo_unexecuted_blocks=1 00:09:28.834 00:09:28.834 ' 00:09:28.834 20:37:23 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:28.834 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:28.834 --rc genhtml_branch_coverage=1 00:09:28.834 --rc genhtml_function_coverage=1 00:09:28.834 --rc genhtml_legend=1 00:09:28.834 --rc geninfo_all_blocks=1 00:09:28.834 --rc geninfo_unexecuted_blocks=1 00:09:28.834 00:09:28.834 ' 00:09:28.834 20:37:23 spdk_dd.spdk_dd_negative -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:28.834 20:37:23 spdk_dd.spdk_dd_negative -- scripts/common.sh@15 -- # shopt -s extglob 00:09:28.834 20:37:23 spdk_dd.spdk_dd_negative -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:28.834 20:37:23 spdk_dd.spdk_dd_negative -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:28.834 20:37:23 spdk_dd.spdk_dd_negative -- scripts/common.sh@553 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:09:28.834 20:37:23 spdk_dd.spdk_dd_negative -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:28.834 20:37:23 spdk_dd.spdk_dd_negative -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:28.834 20:37:23 spdk_dd.spdk_dd_negative -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:28.834 20:37:23 spdk_dd.spdk_dd_negative -- paths/export.sh@5 -- # export PATH 00:09:28.834 20:37:23 spdk_dd.spdk_dd_negative -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:28.834 20:37:23 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@210 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:09:28.834 20:37:23 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@211 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:09:28.834 20:37:23 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@213 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:09:28.834 20:37:23 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@214 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:09:28.834 20:37:23 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@216 -- # run_test dd_invalid_arguments invalid_arguments 00:09:28.834 20:37:23 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:28.834 20:37:23 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:28.834 20:37:23 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:28.834 ************************************ 00:09:28.834 START TEST 
dd_invalid_arguments 00:09:28.834 ************************************ 00:09:28.834 20:37:23 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1129 -- # invalid_arguments 00:09:28.834 20:37:23 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:09:28.834 20:37:23 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@652 -- # local es=0 00:09:28.834 20:37:23 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:09:28.834 20:37:23 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:28.834 20:37:23 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:28.834 20:37:23 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:28.834 20:37:23 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:28.834 20:37:23 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:28.834 20:37:23 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:28.834 20:37:23 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:28.834 20:37:23 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:28.834 20:37:23 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:09:28.834 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options] 00:09:28.834 00:09:28.834 CPU options: 00:09:28.834 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:09:28.834 (like [0,1,10]) 00:09:28.834 --lcores lcore to CPU mapping list. The list is in the format: 00:09:28.834 [<,lcores[@CPUs]>...] 00:09:28.834 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:09:28.834 Within the group, '-' is used for range separator, 00:09:28.834 ',' is used for single number separator. 00:09:28.834 '( )' can be omitted for single element group, 00:09:28.834 '@' can be omitted if cpus and lcores have the same value 00:09:28.834 --disable-cpumask-locks Disable CPU core lock files. 00:09:28.834 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:09:28.834 pollers in the app support interrupt mode) 00:09:28.834 -p, --main-core main (primary) core for DPDK 00:09:28.834 00:09:28.834 Configuration options: 00:09:28.834 -c, --config, --json JSON config file 00:09:28.834 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:09:28.834 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 
00:09:28.834 --wait-for-rpc wait for RPCs to initialize subsystems 00:09:28.834 --rpcs-allowed comma-separated list of permitted RPCS 00:09:28.834 --json-ignore-init-errors don't exit on invalid config entry 00:09:28.834 00:09:28.834 Memory options: 00:09:28.834 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:09:28.834 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:09:28.834 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:09:28.834 -R, --huge-unlink unlink huge files after initialization 00:09:28.834 -n, --mem-channels number of memory channels used for DPDK 00:09:28.834 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:09:28.834 --msg-mempool-size global message memory pool size in count (default: 262143) 00:09:28.834 --no-huge run without using hugepages 00:09:28.834 --enforce-numa enforce NUMA allocations from the specified NUMA node 00:09:28.834 -i, --shm-id shared memory ID (optional) 00:09:28.834 -g, --single-file-segments force creating just one hugetlbfs file 00:09:28.834 00:09:28.834 PCI options: 00:09:28.834 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:09:28.834 -B, --pci-blocked pci addr to block (can be used more than once) 00:09:28.834 -u, --no-pci disable PCI access 00:09:28.834 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:09:28.834 00:09:28.834 Log options: 00:09:28.834 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, 00:09:28.834 app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, 00:09:28.834 bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid_sb, 00:09:28.834 blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, blobfs_bdev_rpc, 00:09:28.834 blobfs_rw, fsdev, fsdev_aio, ftl_core, ftl_init, gpt_parse, idxd, ioat, 00:09:28.835 iscsi_init, json_util, keyring, log_rpc, lvol, lvol_rpc, notify_rpc, 00:09:28.835 nvme, nvme_auth, nvme_cuse, opal, reactor, rpc, rpc_client, sock, 00:09:28.835 sock_posix, spdk_aio_mgr_io, thread, trace, uring, vbdev_delay, 00:09:28.835 vbdev_gpt, vbdev_lvol, vbdev_opal, vbdev_passthru, vbdev_split, 00:09:28.835 vbdev_zone_block, vfio_pci, vfio_user, virtio, virtio_blk, virtio_dev, 00:09:28.835 virtio_pci, virtio_user, virtio_vfio_user, vmd) 00:09:28.835 --silence-noticelog disable notice level logging to stderr 00:09:28.835 00:09:28.835 Trace options: 00:09:28.835 --num-trace-entries number of trace entries for each core, must be power of 2, 00:09:28.835 setting 0 to disable trace (default 32768) 00:09:28.835 Tracepoints vary in size and can use more than one trace entry. 00:09:28.835 -e, --tpoint-group [:] 00:09:28.835 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii=' 00:09:28.835 [2024-11-26 20:37:23.643735] spdk_dd.c:1480:main: *ERROR*: Invalid arguments 00:09:28.835 group_name - tracepoint group name for spdk trace buffers (bdev, ftl, 00:09:28.835 blobfs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, sock, blob, 00:09:28.835 bdev_raid, scheduler, all). 00:09:28.835 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:09:28.835 a tracepoint group. First tpoint inside a group can be enabled by 00:09:28.835 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:09:28.835 combined (e.g. thread,bdev:0x1). 
All available tpoints can be found 00:09:28.835 in /include/spdk_internal/trace_defs.h 00:09:28.835 00:09:28.835 Other options: 00:09:28.835 -h, --help show this usage 00:09:28.835 -v, --version print SPDK version 00:09:28.835 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:09:28.835 --env-context Opaque context for use of the env implementation 00:09:28.835 00:09:28.835 Application specific: 00:09:28.835 [--------- DD Options ---------] 00:09:28.835 --if Input file. Must specify either --if or --ib. 00:09:28.835 --ib Input bdev. Must specifier either --if or --ib 00:09:28.835 --of Output file. Must specify either --of or --ob. 00:09:28.835 --ob Output bdev. Must specify either --of or --ob. 00:09:28.835 --iflag Input file flags. 00:09:28.835 --oflag Output file flags. 00:09:28.835 --bs I/O unit size (default: 4096) 00:09:28.835 --qd Queue depth (default: 2) 00:09:28.835 --count I/O unit count. The number of I/O units to copy. (default: all) 00:09:28.835 --skip Skip this many I/O units at start of input. (default: 0) 00:09:28.835 --seek Skip this many I/O units at start of output. (default: 0) 00:09:28.835 --aio Force usage of AIO. (by default io_uring is used if available) 00:09:28.835 --sparse Enable hole skipping in input target 00:09:28.835 Available iflag and oflag values: 00:09:28.835 append - append mode 00:09:28.835 direct - use direct I/O for data 00:09:28.835 directory - fail unless a directory 00:09:28.835 dsync - use synchronized I/O for data 00:09:28.835 noatime - do not update access time 00:09:28.835 noctty - do not assign controlling terminal from file 00:09:28.835 nofollow - do not follow symlinks 00:09:28.835 nonblock - use non-blocking I/O 00:09:28.835 sync - use synchronized I/O for data and metadata 00:09:28.835 20:37:23 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@655 -- # es=2 00:09:28.835 20:37:23 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:28.835 20:37:23 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:28.835 20:37:23 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:28.835 00:09:28.835 real 0m0.072s 00:09:28.835 user 0m0.041s 00:09:28.835 sys 0m0.030s 00:09:28.835 20:37:23 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:28.835 20:37:23 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@10 -- # set +x 00:09:28.835 ************************************ 00:09:28.835 END TEST dd_invalid_arguments 00:09:28.835 ************************************ 00:09:28.835 20:37:23 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@217 -- # run_test dd_double_input double_input 00:09:28.835 20:37:23 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:28.835 20:37:23 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:28.835 20:37:23 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:28.835 ************************************ 00:09:28.835 START TEST dd_double_input 00:09:28.835 ************************************ 00:09:28.835 20:37:23 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1129 -- # double_input 00:09:28.835 20:37:23 spdk_dd.spdk_dd_negative.dd_double_input -- dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:09:28.835 20:37:23 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@652 -- # local es=0 00:09:28.835 20:37:23 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:09:28.835 20:37:23 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:28.835 20:37:23 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:28.835 20:37:23 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:28.835 20:37:23 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:28.835 20:37:23 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:28.835 20:37:23 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:28.835 20:37:23 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:28.835 20:37:23 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:28.835 20:37:23 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:09:28.835 [2024-11-26 20:37:23.790837] spdk_dd.c:1487:main: *ERROR*: You may specify either --if or --ib, but not both. 
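Every spdk_dd_negative case follows the pattern visible in this trace: call spdk_dd with a missing or contradictory option through the harness's NOT wrapper and require a failing exit status. Without the wrapper, the dd_double_input case above reduces to roughly the following (the if/echo scaffolding is a plain-bash stand-in, not the harness code):

  # both an input file and an input bdev are given, so spdk_dd must refuse to run
  if /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
        --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob=; then
      echo 'expected: "You may specify either --if or --ib, but not both."' >&2
      exit 1
  fi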
00:09:28.835 20:37:23 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@655 -- # es=22 00:09:28.835 20:37:23 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:28.835 20:37:23 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:28.835 20:37:23 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:28.835 00:09:28.835 real 0m0.096s 00:09:28.835 user 0m0.062s 00:09:28.835 sys 0m0.032s 00:09:28.835 20:37:23 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:28.835 ************************************ 00:09:28.835 END TEST dd_double_input 00:09:28.835 ************************************ 00:09:28.835 20:37:23 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@10 -- # set +x 00:09:29.094 20:37:23 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@218 -- # run_test dd_double_output double_output 00:09:29.094 20:37:23 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:29.094 20:37:23 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:29.094 20:37:23 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:29.094 ************************************ 00:09:29.094 START TEST dd_double_output 00:09:29.094 ************************************ 00:09:29.094 20:37:23 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1129 -- # double_output 00:09:29.094 20:37:23 spdk_dd.spdk_dd_negative.dd_double_output -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:09:29.094 20:37:23 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@652 -- # local es=0 00:09:29.094 20:37:23 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:09:29.094 20:37:23 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:29.094 20:37:23 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:29.094 20:37:23 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:29.094 20:37:23 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:29.094 20:37:23 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:29.094 20:37:23 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:29.094 20:37:23 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:29.095 20:37:23 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:29.095 20:37:23 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:09:29.095 [2024-11-26 20:37:23.946906] spdk_dd.c:1493:main: *ERROR*: You may specify either --of or --ob, but not both. 00:09:29.095 20:37:23 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@655 -- # es=22 00:09:29.095 20:37:23 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:29.095 20:37:23 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:29.095 20:37:23 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:29.095 00:09:29.095 real 0m0.087s 00:09:29.095 user 0m0.052s 00:09:29.095 sys 0m0.032s 00:09:29.095 ************************************ 00:09:29.095 END TEST dd_double_output 00:09:29.095 ************************************ 00:09:29.095 20:37:23 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:29.095 20:37:23 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@10 -- # set +x 00:09:29.095 20:37:24 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@219 -- # run_test dd_no_input no_input 00:09:29.095 20:37:24 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:29.095 20:37:24 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:29.095 20:37:24 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:29.095 ************************************ 00:09:29.095 START TEST dd_no_input 00:09:29.095 ************************************ 00:09:29.095 20:37:24 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1129 -- # no_input 00:09:29.095 20:37:24 spdk_dd.spdk_dd_negative.dd_no_input -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:09:29.095 20:37:24 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@652 -- # local es=0 00:09:29.095 20:37:24 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:09:29.095 20:37:24 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:29.095 20:37:24 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:29.095 20:37:24 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:29.095 20:37:24 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:29.095 20:37:24 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:29.095 20:37:24 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:29.095 20:37:24 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:29.095 20:37:24 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:29.095 20:37:24 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:09:29.354 [2024-11-26 20:37:24.097402] spdk_dd.c:1499:main: 
*ERROR*: You must specify either --if or --ib 00:09:29.354 20:37:24 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@655 -- # es=22 00:09:29.354 20:37:24 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:29.354 20:37:24 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:29.354 20:37:24 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:29.354 00:09:29.354 real 0m0.088s 00:09:29.354 user 0m0.058s 00:09:29.354 sys 0m0.029s 00:09:29.354 20:37:24 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:29.354 ************************************ 00:09:29.354 END TEST dd_no_input 00:09:29.354 ************************************ 00:09:29.354 20:37:24 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@10 -- # set +x 00:09:29.354 20:37:24 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@220 -- # run_test dd_no_output no_output 00:09:29.354 20:37:24 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:29.354 20:37:24 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:29.354 20:37:24 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:29.354 ************************************ 00:09:29.354 START TEST dd_no_output 00:09:29.354 ************************************ 00:09:29.354 20:37:24 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1129 -- # no_output 00:09:29.354 20:37:24 spdk_dd.spdk_dd_negative.dd_no_output -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:09:29.354 20:37:24 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@652 -- # local es=0 00:09:29.355 20:37:24 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:09:29.355 20:37:24 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:29.355 20:37:24 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:29.355 20:37:24 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:29.355 20:37:24 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:29.355 20:37:24 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:29.355 20:37:24 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:29.355 20:37:24 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:29.355 20:37:24 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:29.355 20:37:24 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:09:29.355 [2024-11-26 20:37:24.238578] spdk_dd.c:1505:main: *ERROR*: You must specify either --of or --ob 00:09:29.355 20:37:24 
spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@655 -- # es=22 00:09:29.355 20:37:24 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:29.355 20:37:24 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:29.355 20:37:24 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:29.355 00:09:29.355 real 0m0.076s 00:09:29.355 user 0m0.037s 00:09:29.355 sys 0m0.038s 00:09:29.355 20:37:24 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:29.355 ************************************ 00:09:29.355 END TEST dd_no_output 00:09:29.355 ************************************ 00:09:29.355 20:37:24 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@10 -- # set +x 00:09:29.355 20:37:24 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@221 -- # run_test dd_wrong_blocksize wrong_blocksize 00:09:29.355 20:37:24 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:29.355 20:37:24 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:29.355 20:37:24 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:29.355 ************************************ 00:09:29.355 START TEST dd_wrong_blocksize 00:09:29.355 ************************************ 00:09:29.355 20:37:24 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1129 -- # wrong_blocksize 00:09:29.355 20:37:24 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:09:29.355 20:37:24 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@652 -- # local es=0 00:09:29.355 20:37:24 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:09:29.355 20:37:24 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:29.355 20:37:24 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:29.355 20:37:24 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:29.355 20:37:24 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:29.355 20:37:24 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:29.355 20:37:24 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:29.355 20:37:24 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:29.355 20:37:24 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:29.355 20:37:24 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:09:29.614 [2024-11-26 20:37:24.394761] spdk_dd.c:1511:main: *ERROR*: Invalid --bs value 00:09:29.614 20:37:24 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@655 -- # es=22 00:09:29.614 20:37:24 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:29.614 20:37:24 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:29.614 20:37:24 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:29.614 00:09:29.614 real 0m0.101s 00:09:29.614 user 0m0.060s 00:09:29.614 sys 0m0.039s 00:09:29.614 20:37:24 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:29.614 20:37:24 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@10 -- # set +x 00:09:29.614 ************************************ 00:09:29.614 END TEST dd_wrong_blocksize 00:09:29.614 ************************************ 00:09:29.614 20:37:24 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@222 -- # run_test dd_smaller_blocksize smaller_blocksize 00:09:29.614 20:37:24 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:29.614 20:37:24 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:29.614 20:37:24 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:29.614 ************************************ 00:09:29.614 START TEST dd_smaller_blocksize 00:09:29.614 ************************************ 00:09:29.614 20:37:24 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1129 -- # smaller_blocksize 00:09:29.614 20:37:24 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:09:29.614 20:37:24 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@652 -- # local es=0 00:09:29.614 20:37:24 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:09:29.614 20:37:24 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:29.614 20:37:24 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:29.614 20:37:24 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:29.614 20:37:24 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:29.614 20:37:24 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:29.614 20:37:24 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:29.614 20:37:24 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:29.614 
20:37:24 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:29.614 20:37:24 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:09:29.614 [2024-11-26 20:37:24.546022] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:09:29.615 [2024-11-26 20:37:24.546138] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62054 ] 00:09:29.874 [2024-11-26 20:37:24.730836] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:29.874 [2024-11-26 20:37:24.834539] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:30.133 [2024-11-26 20:37:24.883807] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:30.391 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:09:30.759 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:09:30.759 [2024-11-26 20:37:25.696736] spdk_dd.c:1184:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 00:09:30.759 [2024-11-26 20:37:25.696849] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:31.020 [2024-11-26 20:37:25.800866] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:09:31.020 20:37:25 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@655 -- # es=244 00:09:31.020 20:37:25 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:31.020 20:37:25 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@664 -- # es=116 00:09:31.020 20:37:25 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@665 -- # case "$es" in 00:09:31.020 20:37:25 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@672 -- # es=1 00:09:31.020 20:37:25 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:31.020 00:09:31.020 real 0m1.403s 00:09:31.020 user 0m0.515s 00:09:31.020 sys 0m0.775s 00:09:31.020 20:37:25 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:31.020 20:37:25 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@10 -- # set +x 00:09:31.020 ************************************ 00:09:31.020 END TEST dd_smaller_blocksize 00:09:31.020 ************************************ 00:09:31.020 20:37:25 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@223 -- # run_test dd_invalid_count invalid_count 00:09:31.020 20:37:25 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:31.020 20:37:25 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:31.020 20:37:25 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:31.020 ************************************ 00:09:31.020 START TEST dd_invalid_count 00:09:31.020 ************************************ 00:09:31.020 20:37:25 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1129 -- # invalid_count 
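dd_smaller_blocksize, which finished just above, is the one negative case that exercises a runtime failure rather than option parsing: a --bs of 99999999999999 bytes cannot be backed by hugepages (hence the EAL "couldn't find suitable memseg_list" errors), spdk_dd prints "Cannot allocate memory - try smaller block size value" and exits non-zero, and that failure is exactly what the test requires. Stripped of the harness, the probe is roughly (plain-bash stand-in for the NOT wrapper):

  # an oversized I/O unit must fail to allocate; a zero exit status here would fail the test
  if /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
        --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 \
        --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 \
        --bs=99999999999999; then
      echo 'expected: "Cannot allocate memory - try smaller block size value"' >&2
      exit 1
  fi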
00:09:31.020 20:37:25 spdk_dd.spdk_dd_negative.dd_invalid_count -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:09:31.020 20:37:25 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@652 -- # local es=0 00:09:31.020 20:37:25 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:09:31.020 20:37:25 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:31.020 20:37:25 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:31.020 20:37:25 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:31.020 20:37:25 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:31.020 20:37:25 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:31.020 20:37:25 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:31.020 20:37:25 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:31.020 20:37:25 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:31.021 20:37:25 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:09:31.021 [2024-11-26 20:37:26.004767] spdk_dd.c:1517:main: *ERROR*: Invalid --count value 00:09:31.282 20:37:26 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@655 -- # es=22 00:09:31.282 20:37:26 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:31.282 20:37:26 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:31.282 20:37:26 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:31.282 00:09:31.282 real 0m0.077s 00:09:31.282 user 0m0.044s 00:09:31.282 sys 0m0.032s 00:09:31.282 20:37:26 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:31.282 20:37:26 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@10 -- # set +x 00:09:31.282 ************************************ 00:09:31.282 END TEST dd_invalid_count 00:09:31.282 ************************************ 00:09:31.282 20:37:26 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@224 -- # run_test dd_invalid_oflag invalid_oflag 00:09:31.282 20:37:26 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:31.282 20:37:26 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:31.282 20:37:26 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:31.282 ************************************ 
00:09:31.282 START TEST dd_invalid_oflag 00:09:31.282 ************************************ 00:09:31.282 20:37:26 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1129 -- # invalid_oflag 00:09:31.282 20:37:26 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:09:31.282 20:37:26 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@652 -- # local es=0 00:09:31.282 20:37:26 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:09:31.282 20:37:26 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:31.282 20:37:26 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:31.282 20:37:26 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:31.282 20:37:26 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:31.282 20:37:26 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:31.282 20:37:26 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:31.282 20:37:26 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:31.282 20:37:26 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:31.282 20:37:26 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:09:31.282 [2024-11-26 20:37:26.142800] spdk_dd.c:1523:main: *ERROR*: --oflags may be used only with --of 00:09:31.282 20:37:26 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@655 -- # es=22 00:09:31.282 20:37:26 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:31.282 20:37:26 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:31.282 20:37:26 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:31.282 00:09:31.282 real 0m0.083s 00:09:31.282 user 0m0.045s 00:09:31.282 sys 0m0.036s 00:09:31.282 20:37:26 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:31.282 20:37:26 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@10 -- # set +x 00:09:31.282 ************************************ 00:09:31.282 END TEST dd_invalid_oflag 00:09:31.282 ************************************ 00:09:31.282 20:37:26 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@225 -- # run_test dd_invalid_iflag invalid_iflag 00:09:31.282 20:37:26 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:31.282 20:37:26 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:31.282 20:37:26 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:31.282 ************************************ 00:09:31.282 START TEST dd_invalid_iflag 00:09:31.282 
************************************ 00:09:31.282 20:37:26 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1129 -- # invalid_iflag 00:09:31.282 20:37:26 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:09:31.282 20:37:26 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@652 -- # local es=0 00:09:31.282 20:37:26 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:09:31.282 20:37:26 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:31.282 20:37:26 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:31.282 20:37:26 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:31.282 20:37:26 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:31.282 20:37:26 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:31.282 20:37:26 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:31.282 20:37:26 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:31.282 20:37:26 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:31.282 20:37:26 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:09:31.541 [2024-11-26 20:37:26.294816] spdk_dd.c:1529:main: *ERROR*: --iflags may be used only with --if 00:09:31.541 20:37:26 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@655 -- # es=22 00:09:31.541 20:37:26 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:31.541 20:37:26 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:31.541 20:37:26 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:31.541 00:09:31.541 real 0m0.086s 00:09:31.541 user 0m0.052s 00:09:31.541 sys 0m0.033s 00:09:31.541 20:37:26 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:31.541 20:37:26 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@10 -- # set +x 00:09:31.541 ************************************ 00:09:31.541 END TEST dd_invalid_iflag 00:09:31.541 ************************************ 00:09:31.541 20:37:26 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@226 -- # run_test dd_unknown_flag unknown_flag 00:09:31.541 20:37:26 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:31.541 20:37:26 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:31.541 20:37:26 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:31.541 ************************************ 00:09:31.541 START TEST dd_unknown_flag 00:09:31.541 ************************************ 00:09:31.541 
20:37:26 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1129 -- # unknown_flag 00:09:31.541 20:37:26 spdk_dd.spdk_dd_negative.dd_unknown_flag -- dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:09:31.541 20:37:26 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@652 -- # local es=0 00:09:31.541 20:37:26 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:09:31.541 20:37:26 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:31.541 20:37:26 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:31.541 20:37:26 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:31.541 20:37:26 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:31.541 20:37:26 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:31.541 20:37:26 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:31.541 20:37:26 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:31.542 20:37:26 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:31.542 20:37:26 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:09:31.542 [2024-11-26 20:37:26.431672] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:09:31.542 [2024-11-26 20:37:26.431761] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62157 ] 00:09:31.801 [2024-11-26 20:37:26.581133] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:31.801 [2024-11-26 20:37:26.677581] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:31.801 [2024-11-26 20:37:26.727802] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:31.801 [2024-11-26 20:37:26.766503] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:09:31.801 [2024-11-26 20:37:26.766593] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:31.801 [2024-11-26 20:37:26.766670] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:09:31.801 [2024-11-26 20:37:26.766694] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:31.801 [2024-11-26 20:37:26.767034] spdk_dd.c:1218:dd_run: *ERROR*: Failed to register files with io_uring: -9 (Bad file descriptor) 00:09:31.801 [2024-11-26 20:37:26.767059] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:31.801 [2024-11-26 20:37:26.767127] app.c:1049:app_stop: *NOTICE*: spdk_app_stop called twice 00:09:31.801 [2024-11-26 20:37:26.767143] app.c:1049:app_stop: *NOTICE*: spdk_app_stop called twice 00:09:32.058 [2024-11-26 20:37:26.875447] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:09:32.058 20:37:26 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@655 -- # es=234 00:09:32.058 20:37:26 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:32.058 20:37:26 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@664 -- # es=106 00:09:32.058 20:37:26 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@665 -- # case "$es" in 00:09:32.058 20:37:26 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@672 -- # es=1 00:09:32.058 20:37:26 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:32.058 00:09:32.058 real 0m0.586s 00:09:32.058 user 0m0.323s 00:09:32.058 sys 0m0.166s 00:09:32.059 ************************************ 00:09:32.059 END TEST dd_unknown_flag 00:09:32.059 ************************************ 00:09:32.059 20:37:26 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:32.059 20:37:26 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@10 -- # set +x 00:09:32.059 20:37:27 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@227 -- # run_test dd_invalid_json invalid_json 00:09:32.059 20:37:27 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:32.059 20:37:27 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:32.059 20:37:27 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:32.059 ************************************ 00:09:32.059 START TEST dd_invalid_json 00:09:32.059 ************************************ 00:09:32.059 20:37:27 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1129 -- # invalid_json 00:09:32.059 20:37:27 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:09:32.059 20:37:27 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@652 -- # local es=0 00:09:32.059 20:37:27 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:09:32.059 20:37:27 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:32.059 20:37:27 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@94 -- # : 00:09:32.059 20:37:27 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:32.059 20:37:27 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:32.059 20:37:27 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:32.059 20:37:27 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:32.059 20:37:27 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:32.059 20:37:27 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:32.059 20:37:27 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:32.059 20:37:27 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:09:32.317 [2024-11-26 20:37:27.084831] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:09:32.317 [2024-11-26 20:37:27.084931] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62186 ] 00:09:32.317 [2024-11-26 20:37:27.237095] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:32.576 [2024-11-26 20:37:27.346603] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:32.576 [2024-11-26 20:37:27.346754] json_config.c: 535:parse_json: *ERROR*: JSON data cannot be empty 00:09:32.576 [2024-11-26 20:37:27.346790] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:09:32.576 [2024-11-26 20:37:27.346815] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:32.576 [2024-11-26 20:37:27.346891] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:09:32.576 20:37:27 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@655 -- # es=234 00:09:32.576 20:37:27 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:32.576 20:37:27 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@664 -- # es=106 00:09:32.576 ************************************ 00:09:32.576 END TEST dd_invalid_json 00:09:32.576 ************************************ 00:09:32.576 20:37:27 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@665 -- # case "$es" in 00:09:32.576 20:37:27 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@672 -- # es=1 00:09:32.576 20:37:27 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:32.576 00:09:32.576 real 0m0.450s 00:09:32.576 user 0m0.262s 00:09:32.576 sys 0m0.084s 00:09:32.576 20:37:27 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:32.576 20:37:27 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@10 -- # set +x 00:09:32.576 20:37:27 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@228 -- # run_test dd_invalid_seek invalid_seek 00:09:32.576 20:37:27 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:32.576 20:37:27 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:32.576 20:37:27 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:32.576 ************************************ 00:09:32.576 START TEST dd_invalid_seek 00:09:32.576 ************************************ 00:09:32.576 20:37:27 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@1129 -- # invalid_seek 00:09:32.576 20:37:27 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@102 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:09:32.576 20:37:27 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@103 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:09:32.576 20:37:27 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@103 -- # local -A method_bdev_malloc_create_0 00:09:32.576 20:37:27 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@108 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:09:32.576 20:37:27 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@109 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:09:32.576 
20:37:27 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@109 -- # local -A method_bdev_malloc_create_1 00:09:32.576 20:37:27 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@115 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:09:32.576 20:37:27 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@115 -- # gen_conf 00:09:32.576 20:37:27 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@652 -- # local es=0 00:09:32.576 20:37:27 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/common.sh@31 -- # xtrace_disable 00:09:32.576 20:37:27 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:09:32.576 20:37:27 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@10 -- # set +x 00:09:32.576 20:37:27 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:32.576 20:37:27 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:32.576 20:37:27 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:32.576 20:37:27 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:32.576 20:37:27 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:32.576 20:37:27 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:32.576 20:37:27 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:32.576 20:37:27 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:32.576 20:37:27 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:09:32.836 { 00:09:32.836 "subsystems": [ 00:09:32.836 { 00:09:32.836 "subsystem": "bdev", 00:09:32.836 "config": [ 00:09:32.836 { 00:09:32.836 "params": { 00:09:32.836 "block_size": 512, 00:09:32.836 "num_blocks": 512, 00:09:32.836 "name": "malloc0" 00:09:32.836 }, 00:09:32.836 "method": "bdev_malloc_create" 00:09:32.836 }, 00:09:32.836 { 00:09:32.836 "params": { 00:09:32.836 "block_size": 512, 00:09:32.836 "num_blocks": 512, 00:09:32.836 "name": "malloc1" 00:09:32.836 }, 00:09:32.836 "method": "bdev_malloc_create" 00:09:32.836 }, 00:09:32.836 { 00:09:32.836 "method": "bdev_wait_for_examine" 00:09:32.836 } 00:09:32.836 ] 00:09:32.836 } 00:09:32.836 ] 00:09:32.836 } 00:09:32.836 [2024-11-26 20:37:27.606048] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:09:32.836 [2024-11-26 20:37:27.606188] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62215 ] 00:09:32.836 [2024-11-26 20:37:27.767951] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:33.095 [2024-11-26 20:37:27.856961] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:33.095 [2024-11-26 20:37:27.906387] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:33.095 [2024-11-26 20:37:27.968939] spdk_dd.c:1145:dd_run: *ERROR*: --seek value too big (513) - only 512 blocks available in output 00:09:33.095 [2024-11-26 20:37:27.969261] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:33.095 [2024-11-26 20:37:28.078603] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:09:33.355 20:37:28 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@655 -- # es=228 00:09:33.355 20:37:28 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:33.355 20:37:28 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@664 -- # es=100 00:09:33.355 20:37:28 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@665 -- # case "$es" in 00:09:33.355 20:37:28 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@672 -- # es=1 00:09:33.355 20:37:28 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:33.355 00:09:33.355 real 0m0.640s 00:09:33.355 user 0m0.431s 00:09:33.355 sys 0m0.165s 00:09:33.355 ************************************ 00:09:33.355 END TEST dd_invalid_seek 00:09:33.355 ************************************ 00:09:33.355 20:37:28 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:33.355 20:37:28 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@10 -- # set +x 00:09:33.355 20:37:28 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@229 -- # run_test dd_invalid_skip invalid_skip 00:09:33.355 20:37:28 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:33.355 20:37:28 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:33.355 20:37:28 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:33.355 ************************************ 00:09:33.355 START TEST dd_invalid_skip 00:09:33.355 ************************************ 00:09:33.355 20:37:28 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@1129 -- # invalid_skip 00:09:33.355 20:37:28 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@125 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:09:33.355 20:37:28 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@126 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:09:33.355 20:37:28 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@126 -- # local -A method_bdev_malloc_create_0 00:09:33.355 20:37:28 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@131 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:09:33.355 20:37:28 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@132 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' 
['block_size']='512') 00:09:33.355 20:37:28 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@132 -- # local -A method_bdev_malloc_create_1 00:09:33.355 20:37:28 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@138 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:09:33.355 20:37:28 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@652 -- # local es=0 00:09:33.355 20:37:28 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:09:33.355 20:37:28 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@138 -- # gen_conf 00:09:33.355 20:37:28 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:33.355 20:37:28 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/common.sh@31 -- # xtrace_disable 00:09:33.355 20:37:28 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@10 -- # set +x 00:09:33.355 20:37:28 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:33.355 20:37:28 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:33.355 20:37:28 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:33.355 20:37:28 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:33.355 20:37:28 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:33.355 20:37:28 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:33.355 20:37:28 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:33.355 20:37:28 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:09:33.355 { 00:09:33.355 "subsystems": [ 00:09:33.355 { 00:09:33.355 "subsystem": "bdev", 00:09:33.355 "config": [ 00:09:33.355 { 00:09:33.355 "params": { 00:09:33.355 "block_size": 512, 00:09:33.355 "num_blocks": 512, 00:09:33.355 "name": "malloc0" 00:09:33.355 }, 00:09:33.355 "method": "bdev_malloc_create" 00:09:33.355 }, 00:09:33.355 { 00:09:33.355 "params": { 00:09:33.355 "block_size": 512, 00:09:33.355 "num_blocks": 512, 00:09:33.355 "name": "malloc1" 00:09:33.355 }, 00:09:33.355 "method": "bdev_malloc_create" 00:09:33.355 }, 00:09:33.355 { 00:09:33.355 "method": "bdev_wait_for_examine" 00:09:33.355 } 00:09:33.355 ] 00:09:33.355 } 00:09:33.355 ] 00:09:33.355 } 00:09:33.355 [2024-11-26 20:37:28.310551] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:09:33.355 [2024-11-26 20:37:28.310665] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62254 ] 00:09:33.615 [2024-11-26 20:37:28.467641] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:33.615 [2024-11-26 20:37:28.556695] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:33.873 [2024-11-26 20:37:28.606786] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:33.873 [2024-11-26 20:37:28.672472] spdk_dd.c:1102:dd_run: *ERROR*: --skip value too big (513) - only 512 blocks available in input 00:09:33.873 [2024-11-26 20:37:28.672550] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:33.873 [2024-11-26 20:37:28.779969] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:09:33.873 20:37:28 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@655 -- # es=228 00:09:33.873 20:37:28 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:33.874 20:37:28 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@664 -- # es=100 00:09:33.874 20:37:28 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@665 -- # case "$es" in 00:09:33.874 20:37:28 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@672 -- # es=1 00:09:33.874 20:37:28 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:33.874 00:09:33.874 real 0m0.623s 00:09:33.874 user 0m0.385s 00:09:33.874 sys 0m0.193s 00:09:34.133 ************************************ 00:09:34.133 END TEST dd_invalid_skip 00:09:34.133 ************************************ 00:09:34.133 20:37:28 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:34.133 20:37:28 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@10 -- # set +x 00:09:34.133 20:37:28 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@230 -- # run_test dd_invalid_input_count invalid_input_count 00:09:34.133 20:37:28 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:34.133 20:37:28 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:34.133 20:37:28 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:34.133 ************************************ 00:09:34.133 START TEST dd_invalid_input_count 00:09:34.133 ************************************ 00:09:34.133 20:37:28 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@1129 -- # invalid_input_count 00:09:34.133 20:37:28 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@149 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:09:34.133 20:37:28 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@150 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:09:34.133 20:37:28 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@150 -- # local -A method_bdev_malloc_create_0 00:09:34.133 20:37:28 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@155 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:09:34.133 20:37:28 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@156 -- # 
method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:09:34.133 20:37:28 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@156 -- # local -A method_bdev_malloc_create_1 00:09:34.133 20:37:28 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@162 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:09:34.133 20:37:28 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@162 -- # gen_conf 00:09:34.133 20:37:28 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@652 -- # local es=0 00:09:34.133 20:37:28 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:09:34.133 20:37:28 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/common.sh@31 -- # xtrace_disable 00:09:34.133 20:37:28 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:34.133 20:37:28 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@10 -- # set +x 00:09:34.133 20:37:28 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:34.133 20:37:28 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:34.133 20:37:28 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:34.133 20:37:28 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:34.133 20:37:28 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:34.133 20:37:28 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:34.133 20:37:28 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:34.133 20:37:28 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:09:34.133 [2024-11-26 20:37:28.967470] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:09:34.133 [2024-11-26 20:37:28.967574] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62288 ] 00:09:34.133 { 00:09:34.133 "subsystems": [ 00:09:34.133 { 00:09:34.133 "subsystem": "bdev", 00:09:34.133 "config": [ 00:09:34.133 { 00:09:34.133 "params": { 00:09:34.133 "block_size": 512, 00:09:34.133 "num_blocks": 512, 00:09:34.133 "name": "malloc0" 00:09:34.133 }, 00:09:34.133 "method": "bdev_malloc_create" 00:09:34.133 }, 00:09:34.133 { 00:09:34.133 "params": { 00:09:34.133 "block_size": 512, 00:09:34.133 "num_blocks": 512, 00:09:34.133 "name": "malloc1" 00:09:34.133 }, 00:09:34.133 "method": "bdev_malloc_create" 00:09:34.133 }, 00:09:34.133 { 00:09:34.133 "method": "bdev_wait_for_examine" 00:09:34.133 } 00:09:34.133 ] 00:09:34.133 } 00:09:34.133 ] 00:09:34.133 } 00:09:34.133 [2024-11-26 20:37:29.115059] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:34.401 [2024-11-26 20:37:29.211463] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:34.402 [2024-11-26 20:37:29.275727] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:34.402 [2024-11-26 20:37:29.340656] spdk_dd.c:1110:dd_run: *ERROR*: --count value too big (513) - only 512 blocks available from input 00:09:34.402 [2024-11-26 20:37:29.340740] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:34.663 [2024-11-26 20:37:29.449660] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:09:34.663 20:37:29 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@655 -- # es=228 00:09:34.663 20:37:29 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:34.663 20:37:29 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@664 -- # es=100 00:09:34.663 20:37:29 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@665 -- # case "$es" in 00:09:34.663 20:37:29 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@672 -- # es=1 00:09:34.663 20:37:29 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:34.663 00:09:34.663 real 0m0.621s 00:09:34.663 user 0m0.381s 00:09:34.663 sys 0m0.197s 00:09:34.663 20:37:29 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:34.663 20:37:29 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@10 -- # set +x 00:09:34.663 ************************************ 00:09:34.663 END TEST dd_invalid_input_count 00:09:34.663 ************************************ 00:09:34.663 20:37:29 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@231 -- # run_test dd_invalid_output_count invalid_output_count 00:09:34.663 20:37:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:34.663 20:37:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:34.663 20:37:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:34.663 ************************************ 00:09:34.663 START TEST dd_invalid_output_count 00:09:34.663 ************************************ 00:09:34.663 20:37:29 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@1129 -- # 
invalid_output_count 00:09:34.663 20:37:29 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@173 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:09:34.663 20:37:29 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@174 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:09:34.663 20:37:29 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@174 -- # local -A method_bdev_malloc_create_0 00:09:34.663 20:37:29 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@180 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:09:34.663 20:37:29 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@652 -- # local es=0 00:09:34.663 20:37:29 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:09:34.663 20:37:29 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:34.663 20:37:29 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@180 -- # gen_conf 00:09:34.663 20:37:29 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/common.sh@31 -- # xtrace_disable 00:09:34.663 20:37:29 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@10 -- # set +x 00:09:34.663 20:37:29 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:34.663 20:37:29 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:34.663 20:37:29 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:34.663 20:37:29 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:34.663 20:37:29 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:34.663 20:37:29 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:34.663 20:37:29 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:34.663 20:37:29 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:09:34.663 { 00:09:34.663 "subsystems": [ 00:09:34.663 { 00:09:34.663 "subsystem": "bdev", 00:09:34.663 "config": [ 00:09:34.663 { 00:09:34.663 "params": { 00:09:34.663 "block_size": 512, 00:09:34.663 "num_blocks": 512, 00:09:34.663 "name": "malloc0" 00:09:34.663 }, 00:09:34.663 "method": "bdev_malloc_create" 00:09:34.663 }, 00:09:34.663 { 00:09:34.663 "method": "bdev_wait_for_examine" 00:09:34.663 } 00:09:34.663 ] 00:09:34.663 } 00:09:34.663 ] 00:09:34.664 } 00:09:34.922 [2024-11-26 20:37:29.654936] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 
initialization... 00:09:34.922 [2024-11-26 20:37:29.655081] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62321 ] 00:09:34.922 [2024-11-26 20:37:29.813115] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:34.922 [2024-11-26 20:37:29.902659] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:35.180 [2024-11-26 20:37:29.952278] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:35.180 [2024-11-26 20:37:30.007291] spdk_dd.c:1152:dd_run: *ERROR*: --count value too big (513) - only 512 blocks available in output 00:09:35.180 [2024-11-26 20:37:30.007375] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:35.180 [2024-11-26 20:37:30.119342] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:09:35.439 20:37:30 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@655 -- # es=228 00:09:35.439 20:37:30 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:35.439 20:37:30 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@664 -- # es=100 00:09:35.439 20:37:30 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@665 -- # case "$es" in 00:09:35.439 20:37:30 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@672 -- # es=1 00:09:35.439 20:37:30 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:35.439 00:09:35.439 real 0m0.611s 00:09:35.439 user 0m0.396s 00:09:35.439 sys 0m0.167s 00:09:35.439 20:37:30 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:35.439 20:37:30 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@10 -- # set +x 00:09:35.439 ************************************ 00:09:35.439 END TEST dd_invalid_output_count 00:09:35.439 ************************************ 00:09:35.439 20:37:30 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@232 -- # run_test dd_bs_not_multiple bs_not_multiple 00:09:35.439 20:37:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:35.439 20:37:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:35.439 20:37:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:35.439 ************************************ 00:09:35.439 START TEST dd_bs_not_multiple 00:09:35.439 ************************************ 00:09:35.439 20:37:30 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@1129 -- # bs_not_multiple 00:09:35.439 20:37:30 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@190 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:09:35.439 20:37:30 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@191 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:09:35.439 20:37:30 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@191 -- # local -A method_bdev_malloc_create_0 00:09:35.439 20:37:30 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@196 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:09:35.439 20:37:30 
spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@197 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:09:35.439 20:37:30 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@197 -- # local -A method_bdev_malloc_create_1 00:09:35.439 20:37:30 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@203 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:09:35.439 20:37:30 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@652 -- # local es=0 00:09:35.439 20:37:30 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:09:35.439 20:37:30 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:35.439 20:37:30 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@203 -- # gen_conf 00:09:35.439 20:37:30 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/common.sh@31 -- # xtrace_disable 00:09:35.440 20:37:30 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@10 -- # set +x 00:09:35.440 20:37:30 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:35.440 20:37:30 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:35.440 20:37:30 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:35.440 20:37:30 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:35.440 20:37:30 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:35.440 20:37:30 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:35.440 20:37:30 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:35.440 20:37:30 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:09:35.440 { 00:09:35.440 "subsystems": [ 00:09:35.440 { 00:09:35.440 "subsystem": "bdev", 00:09:35.440 "config": [ 00:09:35.440 { 00:09:35.440 "params": { 00:09:35.440 "block_size": 512, 00:09:35.440 "num_blocks": 512, 00:09:35.440 "name": "malloc0" 00:09:35.440 }, 00:09:35.440 "method": "bdev_malloc_create" 00:09:35.440 }, 00:09:35.440 { 00:09:35.440 "params": { 00:09:35.440 "block_size": 512, 00:09:35.440 "num_blocks": 512, 00:09:35.440 "name": "malloc1" 00:09:35.440 }, 00:09:35.440 "method": "bdev_malloc_create" 00:09:35.440 }, 00:09:35.440 { 00:09:35.440 "method": "bdev_wait_for_examine" 00:09:35.440 } 00:09:35.440 ] 00:09:35.440 } 00:09:35.440 ] 00:09:35.440 } 00:09:35.440 [2024-11-26 20:37:30.336102] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:09:35.440 [2024-11-26 20:37:30.336321] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62357 ] 00:09:35.699 [2024-11-26 20:37:30.500110] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:35.699 [2024-11-26 20:37:30.583355] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:35.699 [2024-11-26 20:37:30.629857] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:35.957 [2024-11-26 20:37:30.690938] spdk_dd.c:1168:dd_run: *ERROR*: --bs value must be a multiple of input native block size (512) 00:09:35.957 [2024-11-26 20:37:30.691017] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:35.957 [2024-11-26 20:37:30.799810] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:09:35.957 20:37:30 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@655 -- # es=234 00:09:35.957 20:37:30 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:35.957 20:37:30 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@664 -- # es=106 00:09:35.957 20:37:30 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@665 -- # case "$es" in 00:09:35.957 20:37:30 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@672 -- # es=1 00:09:35.957 20:37:30 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:35.957 00:09:35.957 real 0m0.642s 00:09:35.957 user 0m0.414s 00:09:35.957 sys 0m0.190s 00:09:35.957 ************************************ 00:09:35.957 END TEST dd_bs_not_multiple 00:09:35.957 ************************************ 00:09:35.957 20:37:30 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:35.957 20:37:30 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@10 -- # set +x 00:09:36.216 00:09:36.216 real 0m7.575s 00:09:36.216 user 0m3.963s 00:09:36.216 sys 0m3.054s 00:09:36.216 20:37:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:36.216 ************************************ 00:09:36.216 END TEST spdk_dd_negative 00:09:36.216 ************************************ 00:09:36.216 20:37:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:36.216 00:09:36.216 real 1m33.438s 00:09:36.216 user 0m58.012s 00:09:36.216 sys 0m46.390s 00:09:36.216 20:37:31 spdk_dd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:36.216 ************************************ 00:09:36.216 END TEST spdk_dd 00:09:36.216 ************************************ 00:09:36.216 20:37:31 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:09:36.216 20:37:31 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:09:36.216 20:37:31 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:09:36.216 20:37:31 -- spdk/autotest.sh@260 -- # timing_exit lib 00:09:36.216 20:37:31 -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:36.216 20:37:31 -- common/autotest_common.sh@10 -- # set +x 00:09:36.216 20:37:31 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:09:36.216 20:37:31 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:09:36.216 20:37:31 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:09:36.216 20:37:31 -- spdk/autotest.sh@277 -- 
# export NET_TYPE 00:09:36.216 20:37:31 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:09:36.216 20:37:31 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:09:36.216 20:37:31 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:09:36.216 20:37:31 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:36.216 20:37:31 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:36.216 20:37:31 -- common/autotest_common.sh@10 -- # set +x 00:09:36.216 ************************************ 00:09:36.216 START TEST nvmf_tcp 00:09:36.216 ************************************ 00:09:36.216 20:37:31 nvmf_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:09:36.216 * Looking for test storage... 00:09:36.475 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:09:36.475 20:37:31 nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:36.475 20:37:31 nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:09:36.475 20:37:31 nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:36.475 20:37:31 nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:36.475 20:37:31 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:36.476 20:37:31 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:36.476 20:37:31 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:36.476 20:37:31 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:09:36.476 20:37:31 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:09:36.476 20:37:31 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:09:36.476 20:37:31 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:09:36.476 20:37:31 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:09:36.476 20:37:31 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:09:36.476 20:37:31 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:09:36.476 20:37:31 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:36.476 20:37:31 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:09:36.476 20:37:31 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:09:36.476 20:37:31 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:36.476 20:37:31 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:36.476 20:37:31 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:09:36.476 20:37:31 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:09:36.476 20:37:31 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:36.476 20:37:31 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:09:36.476 20:37:31 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:09:36.476 20:37:31 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:09:36.476 20:37:31 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:09:36.476 20:37:31 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:36.476 20:37:31 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:09:36.476 20:37:31 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:09:36.476 20:37:31 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:36.476 20:37:31 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:36.476 20:37:31 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:09:36.476 20:37:31 nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:36.476 20:37:31 nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:36.476 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:36.476 --rc genhtml_branch_coverage=1 00:09:36.476 --rc genhtml_function_coverage=1 00:09:36.476 --rc genhtml_legend=1 00:09:36.476 --rc geninfo_all_blocks=1 00:09:36.476 --rc geninfo_unexecuted_blocks=1 00:09:36.476 00:09:36.476 ' 00:09:36.476 20:37:31 nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:36.476 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:36.476 --rc genhtml_branch_coverage=1 00:09:36.476 --rc genhtml_function_coverage=1 00:09:36.476 --rc genhtml_legend=1 00:09:36.476 --rc geninfo_all_blocks=1 00:09:36.476 --rc geninfo_unexecuted_blocks=1 00:09:36.476 00:09:36.476 ' 00:09:36.476 20:37:31 nvmf_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:36.476 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:36.476 --rc genhtml_branch_coverage=1 00:09:36.476 --rc genhtml_function_coverage=1 00:09:36.476 --rc genhtml_legend=1 00:09:36.476 --rc geninfo_all_blocks=1 00:09:36.476 --rc geninfo_unexecuted_blocks=1 00:09:36.476 00:09:36.476 ' 00:09:36.476 20:37:31 nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:36.476 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:36.476 --rc genhtml_branch_coverage=1 00:09:36.476 --rc genhtml_function_coverage=1 00:09:36.476 --rc genhtml_legend=1 00:09:36.476 --rc geninfo_all_blocks=1 00:09:36.476 --rc geninfo_unexecuted_blocks=1 00:09:36.476 00:09:36.476 ' 00:09:36.476 20:37:31 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:09:36.476 20:37:31 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:09:36.476 20:37:31 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:09:36.476 20:37:31 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:36.476 20:37:31 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:36.476 20:37:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:36.476 ************************************ 00:09:36.476 START TEST nvmf_target_core 00:09:36.476 ************************************ 00:09:36.476 20:37:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:09:36.476 * Looking for test storage... 00:09:36.476 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:09:36.476 20:37:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:36.476 20:37:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lcov --version 00:09:36.476 20:37:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:36.735 20:37:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:36.735 20:37:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:36.735 20:37:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:36.735 20:37:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:36.735 20:37:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:09:36.735 20:37:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:09:36.735 20:37:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:09:36.735 20:37:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:09:36.735 20:37:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:09:36.735 20:37:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:09:36.735 20:37:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:09:36.735 20:37:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:36.735 20:37:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:09:36.735 20:37:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:09:36.735 20:37:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:36.735 20:37:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:36.735 20:37:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:09:36.735 20:37:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:09:36.735 20:37:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:36.735 20:37:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:09:36.735 20:37:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:09:36.735 20:37:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:09:36.735 20:37:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:09:36.735 20:37:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:36.735 20:37:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:09:36.735 20:37:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:09:36.735 20:37:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:36.735 20:37:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:36.735 20:37:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:09:36.735 20:37:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:36.735 20:37:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:36.735 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:36.735 --rc genhtml_branch_coverage=1 00:09:36.735 --rc genhtml_function_coverage=1 00:09:36.735 --rc genhtml_legend=1 00:09:36.735 --rc geninfo_all_blocks=1 00:09:36.735 --rc geninfo_unexecuted_blocks=1 00:09:36.735 00:09:36.735 ' 00:09:36.735 20:37:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:36.735 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:36.735 --rc genhtml_branch_coverage=1 00:09:36.735 --rc genhtml_function_coverage=1 00:09:36.735 --rc genhtml_legend=1 00:09:36.735 --rc geninfo_all_blocks=1 00:09:36.735 --rc geninfo_unexecuted_blocks=1 00:09:36.735 00:09:36.735 ' 00:09:36.735 20:37:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:36.735 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:36.735 --rc genhtml_branch_coverage=1 00:09:36.735 --rc genhtml_function_coverage=1 00:09:36.735 --rc genhtml_legend=1 00:09:36.735 --rc geninfo_all_blocks=1 00:09:36.735 --rc geninfo_unexecuted_blocks=1 00:09:36.735 00:09:36.735 ' 00:09:36.735 20:37:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:36.735 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:36.735 --rc genhtml_branch_coverage=1 00:09:36.735 --rc genhtml_function_coverage=1 00:09:36.735 --rc genhtml_legend=1 00:09:36.735 --rc geninfo_all_blocks=1 00:09:36.735 --rc geninfo_unexecuted_blocks=1 00:09:36.735 00:09:36.735 ' 00:09:36.735 20:37:31 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:09:36.735 20:37:31 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:09:36.735 20:37:31 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:36.735 20:37:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:09:36.735 20:37:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:36.735 20:37:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:36.735 20:37:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:36.735 20:37:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:36.735 20:37:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:36.735 20:37:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:36.735 20:37:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:36.735 20:37:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:36.735 20:37:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:36.735 20:37:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:36.735 20:37:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b 00:09:36.735 20:37:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=5b7a0101-ee75-44bd-b64f-b6a56d193f2b 00:09:36.735 20:37:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:36.735 20:37:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:36.735 20:37:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:36.735 20:37:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:36.735 20:37:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:36.735 20:37:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:09:36.735 20:37:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:36.735 20:37:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:36.735 20:37:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:36.735 20:37:31 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:36.735 20:37:31 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:09:36.735 20:37:31 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:36.735 20:37:31 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:09:36.735 20:37:31 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:36.735 20:37:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:09:36.735 20:37:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:36.735 20:37:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:36.735 20:37:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:36.735 20:37:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:36.735 20:37:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:36.735 20:37:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:36.735 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:36.735 20:37:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:36.735 20:37:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:36.735 20:37:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:36.735 20:37:31 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:09:36.735 20:37:31 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:09:36.735 20:37:31 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 1 -eq 0 ]] 00:09:36.735 20:37:31 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:09:36.736 20:37:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:36.736 20:37:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:36.736 20:37:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:36.736 ************************************ 00:09:36.736 START TEST nvmf_host_management 00:09:36.736 ************************************ 00:09:36.736 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:09:36.736 * Looking for test storage... 
00:09:36.995 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:36.995 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:36.995 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:09:36.995 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:36.996 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:36.996 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:36.996 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:36.996 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:36.996 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:09:36.996 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:09:36.996 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:09:36.996 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:09:36.996 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:09:36.996 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:09:36.996 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:09:36.996 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:36.996 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:09:36.996 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:09:36.996 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:36.996 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:36.996 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:09:36.996 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:09:36.996 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:36.996 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:09:36.996 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:09:36.996 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:09:36.996 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:09:36.996 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:36.996 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:09:36.996 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:09:36.996 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:36.996 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:36.996 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:09:36.996 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:36.996 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:36.996 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:36.996 --rc genhtml_branch_coverage=1 00:09:36.996 --rc genhtml_function_coverage=1 00:09:36.996 --rc genhtml_legend=1 00:09:36.996 --rc geninfo_all_blocks=1 00:09:36.996 --rc geninfo_unexecuted_blocks=1 00:09:36.996 00:09:36.996 ' 00:09:36.996 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:36.996 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:36.996 --rc genhtml_branch_coverage=1 00:09:36.996 --rc genhtml_function_coverage=1 00:09:36.996 --rc genhtml_legend=1 00:09:36.996 --rc geninfo_all_blocks=1 00:09:36.996 --rc geninfo_unexecuted_blocks=1 00:09:36.996 00:09:36.996 ' 00:09:36.996 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:36.996 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:36.996 --rc genhtml_branch_coverage=1 00:09:36.996 --rc genhtml_function_coverage=1 00:09:36.996 --rc genhtml_legend=1 00:09:36.996 --rc geninfo_all_blocks=1 00:09:36.996 --rc geninfo_unexecuted_blocks=1 00:09:36.996 00:09:36.996 ' 00:09:36.996 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:36.996 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:36.996 --rc genhtml_branch_coverage=1 00:09:36.996 --rc genhtml_function_coverage=1 00:09:36.996 --rc genhtml_legend=1 00:09:36.996 --rc geninfo_all_blocks=1 00:09:36.996 --rc geninfo_unexecuted_blocks=1 00:09:36.996 00:09:36.996 ' 00:09:36.996 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
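
The repeated "lt 1.15 2" traces above are the coverage guard in scripts/common.sh: each nested test sources it, reads lcov --version, and keeps the old-style --rc lcov_*_coverage options only while the installed lcov is older than 2. A minimal bash sketch of that kind of dotted-version comparison, using a hypothetical helper name version_lt rather than the literal lt/cmp_versions implementation from scripts/common.sh:

# Illustrative sketch only; numeric version components assumed.
version_lt() {                     # version_lt 1.15 2  -> success if $1 < $2
    local -a v1 v2
    IFS=.- read -ra v1 <<< "$1"
    IFS=.- read -ra v2 <<< "$2"
    local i max=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < max; i++ )); do
        local a=${v1[i]:-0} b=${v2[i]:-0}
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1                       # equal versions are not "less than"
}

lcov_ver=$(lcov --version | awk '{print $NF}')   # e.g. "1.15", as in the trace
if version_lt "$lcov_ver" 2; then
    LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
fi                                 # lcov >= 2 takes a different set of --rc names
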
00:09:36.996 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:09:36.996 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:36.996 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:36.996 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:36.996 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:36.996 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:36.996 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:36.996 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:36.996 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:36.996 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:36.996 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:36.996 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b 00:09:36.996 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=5b7a0101-ee75-44bd-b64f-b6a56d193f2b 00:09:36.996 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:36.996 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:36.996 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:36.996 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:36.996 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:36.996 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:09:36.996 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:36.996 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:36.996 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:36.996 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:36.996 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:36.996 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:36.996 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:09:36.996 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:36.996 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:09:36.996 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:36.996 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:36.996 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:36.996 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:36.996 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:36.996 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:36.996 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:36.996 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:36.996 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:36.996 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:36.996 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:36.996 20:37:31 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:36.996 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:09:36.997 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:36.997 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:36.997 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:36.997 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:36.997 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:36.997 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:36.997 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:36.997 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:36.997 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:09:36.997 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:09:36.997 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:09:36.997 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:09:36.997 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:09:36.997 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@460 -- # nvmf_veth_init 00:09:36.997 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:36.997 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:36.997 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:36.997 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:36.997 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:36.997 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:36.997 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:36.997 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:36.997 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:36.997 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:36.997 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:36.997 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:36.997 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:36.997 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:36.997 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:36.997 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:36.997 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:36.997 Cannot find device "nvmf_init_br" 00:09:36.997 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # true 00:09:36.997 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:36.997 Cannot find device "nvmf_init_br2" 00:09:36.997 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # true 00:09:36.997 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:36.997 Cannot find device "nvmf_tgt_br" 00:09:36.997 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@164 -- # true 00:09:36.997 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:36.997 Cannot find device "nvmf_tgt_br2" 00:09:36.997 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@165 -- # true 00:09:36.997 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:36.997 Cannot find device "nvmf_init_br" 00:09:36.997 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # true 00:09:36.997 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:36.997 Cannot find device "nvmf_init_br2" 00:09:36.997 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@167 -- # true 00:09:36.997 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:36.997 Cannot find device "nvmf_tgt_br" 00:09:36.997 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@168 -- # true 00:09:36.997 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:36.997 Cannot find device "nvmf_tgt_br2" 00:09:36.997 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # true 00:09:36.997 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:37.255 Cannot find device "nvmf_br" 00:09:37.255 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # true 00:09:37.255 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:37.255 Cannot find device "nvmf_init_if" 00:09:37.255 20:37:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # true 00:09:37.255 20:37:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:37.255 Cannot find device "nvmf_init_if2" 00:09:37.255 20:37:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # true 00:09:37.255 20:37:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:37.255 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:37.255 20:37:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@173 -- # true 00:09:37.255 20:37:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:37.255 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:37.255 20:37:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # true 00:09:37.255 20:37:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:37.255 20:37:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:37.255 20:37:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:37.255 20:37:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:37.255 20:37:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:37.255 20:37:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:37.255 20:37:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:37.255 20:37:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:37.255 20:37:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:37.255 20:37:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:37.255 20:37:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:37.255 20:37:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:37.255 20:37:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:37.255 20:37:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:37.255 20:37:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:37.255 20:37:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:37.255 20:37:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:37.255 20:37:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:37.255 20:37:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:37.255 20:37:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:37.255 20:37:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@207 -- # ip 
link add nvmf_br type bridge 00:09:37.513 20:37:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:37.513 20:37:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:37.513 20:37:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:37.513 20:37:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:37.513 20:37:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:37.513 20:37:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:37.513 20:37:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:37.513 20:37:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:37.513 20:37:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:37.514 20:37:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:37.514 20:37:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:37.514 20:37:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:37.514 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:37.514 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.118 ms 00:09:37.514 00:09:37.514 --- 10.0.0.3 ping statistics --- 00:09:37.514 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:37.514 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:09:37.514 20:37:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:37.514 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:37.514 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.063 ms 00:09:37.514 00:09:37.514 --- 10.0.0.4 ping statistics --- 00:09:37.514 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:37.514 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:09:37.514 20:37:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:37.514 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:37.514 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.041 ms 00:09:37.514 00:09:37.514 --- 10.0.0.1 ping statistics --- 00:09:37.514 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:37.514 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:09:37.514 20:37:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:37.514 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:37.514 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.055 ms 00:09:37.514 00:09:37.514 --- 10.0.0.2 ping statistics --- 00:09:37.514 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:37.514 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:09:37.514 20:37:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:37.514 20:37:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@461 -- # return 0 00:09:37.514 20:37:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:37.514 20:37:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:37.514 20:37:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:37.514 20:37:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:37.514 20:37:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:37.514 20:37:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:37.514 20:37:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:37.514 20:37:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:09:37.514 20:37:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:09:37.514 20:37:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:09:37.514 20:37:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:37.514 20:37:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:37.514 20:37:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:37.514 20:37:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=62705 00:09:37.514 20:37:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 62705 00:09:37.514 20:37:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 62705 ']' 00:09:37.514 20:37:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:09:37.514 20:37:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:37.514 20:37:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:37.514 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:37.514 20:37:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:37.514 20:37:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:37.514 20:37:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:37.772 [2024-11-26 20:37:32.525783] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:09:37.772 [2024-11-26 20:37:32.525924] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:37.772 [2024-11-26 20:37:32.687560] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:38.051 [2024-11-26 20:37:32.785648] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:38.051 [2024-11-26 20:37:32.785943] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:38.051 [2024-11-26 20:37:32.786108] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:38.051 [2024-11-26 20:37:32.786299] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:38.051 [2024-11-26 20:37:32.786442] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:38.051 [2024-11-26 20:37:32.787757] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:38.051 [2024-11-26 20:37:32.787825] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:38.051 [2024-11-26 20:37:32.787897] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:09:38.051 [2024-11-26 20:37:32.788029] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:38.051 [2024-11-26 20:37:32.838680] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:39.035 20:37:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:39.035 20:37:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:09:39.035 20:37:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:39.035 20:37:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:39.035 20:37:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:39.035 20:37:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:39.035 20:37:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:39.036 20:37:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.036 20:37:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:39.036 [2024-11-26 20:37:33.757975] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:39.036 20:37:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.036 20:37:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:09:39.036 20:37:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:39.036 20:37:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:39.036 20:37:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 
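
The nvmf_veth_init sequence traced above (nvmf/common.sh@145-225) builds the virtual topology these TCP tests run over: a network namespace for the target, veth pairs whose target ends are moved into that namespace, a bridge joining the host-side peers, iptables rules opening port 4420, and ping checks in both directions before the target is started inside the namespace. A condensed, hand-runnable sketch of the same topology, showing only the first initiator/target pair (the *_if2 pair in the trace is analogous) and omitting the error handling:

ip netns add nvmf_tgt_ns_spdk

ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator end stays on the host
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # target end moves into the namespace
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

ip link add nvmf_br type bridge                               # bridge the host-side peers together
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br

iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

ping -c 1 10.0.0.3                                            # host -> target namespace
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1             # target namespace -> host
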
00:09:39.036 20:37:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:09:39.036 20:37:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:09:39.036 20:37:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.036 20:37:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:39.036 Malloc0 00:09:39.036 [2024-11-26 20:37:33.855210] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:39.036 20:37:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.036 20:37:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:09:39.036 20:37:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:39.036 20:37:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:39.036 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:39.036 20:37:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=62765 00:09:39.036 20:37:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 62765 /var/tmp/bdevperf.sock 00:09:39.036 20:37:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 62765 ']' 00:09:39.036 20:37:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:39.036 20:37:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:39.036 20:37:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
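
The rpc_cmd batch above (host_management.sh@23-30) configures the target that was just started inside the namespace: judging from the trace it creates the TCP transport, a 64 MiB / 512-byte-block Malloc0 bdev, and a subsystem listening on 10.0.0.3:4420 for host nqn.2016-06.io.spdk:host0. The exact rpcs.txt contents are not echoed in this log, so the following scripts/rpc.py sequence is a hedged reconstruction of what such a batch typically looks like (rpc_cmd in the test framework wraps scripts/rpc.py):

RPC="scripts/rpc.py"
$RPC nvmf_create_transport -t tcp -o -u 8192                  # as traced at host_management.sh@18
$RPC bdev_malloc_create 64 512 -b Malloc0                     # MALLOC_BDEV_SIZE x MALLOC_BLOCK_SIZE
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDKISFASTANDAWESOME
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
$RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
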
00:09:39.036 20:37:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:09:39.036 20:37:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:09:39.036 20:37:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:39.036 20:37:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:39.036 20:37:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:09:39.036 20:37:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:09:39.036 20:37:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:39.036 20:37:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:39.036 { 00:09:39.036 "params": { 00:09:39.036 "name": "Nvme$subsystem", 00:09:39.036 "trtype": "$TEST_TRANSPORT", 00:09:39.036 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:39.036 "adrfam": "ipv4", 00:09:39.036 "trsvcid": "$NVMF_PORT", 00:09:39.036 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:39.036 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:39.036 "hdgst": ${hdgst:-false}, 00:09:39.036 "ddgst": ${ddgst:-false} 00:09:39.036 }, 00:09:39.036 "method": "bdev_nvme_attach_controller" 00:09:39.036 } 00:09:39.036 EOF 00:09:39.036 )") 00:09:39.036 20:37:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:09:39.036 20:37:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:09:39.036 20:37:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:09:39.036 20:37:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:39.036 "params": { 00:09:39.036 "name": "Nvme0", 00:09:39.036 "trtype": "tcp", 00:09:39.036 "traddr": "10.0.0.3", 00:09:39.036 "adrfam": "ipv4", 00:09:39.036 "trsvcid": "4420", 00:09:39.036 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:39.036 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:09:39.036 "hdgst": false, 00:09:39.036 "ddgst": false 00:09:39.036 }, 00:09:39.036 "method": "bdev_nvme_attach_controller" 00:09:39.036 }' 00:09:39.036 [2024-11-26 20:37:33.977588] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:09:39.036 [2024-11-26 20:37:33.977731] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62765 ] 00:09:39.293 [2024-11-26 20:37:34.134056] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:39.293 [2024-11-26 20:37:34.231022] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:39.551 [2024-11-26 20:37:34.323716] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:39.551 Running I/O for 10 seconds... 
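
bdevperf is driven above with a JSON config produced by gen_nvmf_target_json and passed through /dev/fd/63; the controller parameters (Nvme0, tcp, 10.0.0.3:4420, cnode0/host0, digests off) are printed in the trace, while the outer wrapper is not. Below is a sketch of roughly the document bdevperf consumes, with the controller params copied from the trace and the outer subsystems/bdev wrapper filled in as an assumption based on the usual SPDK JSON-config shape:

cat > /tmp/bdevperf_nvme.json <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.3",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
JSON

# Same invocation as the trace: 64-deep queue of 64 KiB verify I/O for 10 seconds.
build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /tmp/bdevperf_nvme.json \
    -q 64 -o 65536 -w verify -t 10
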
00:09:39.551 20:37:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:39.551 20:37:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:09:39.551 20:37:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:09:39.551 20:37:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.551 20:37:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:39.808 20:37:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.808 20:37:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:39.808 20:37:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:09:39.808 20:37:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:09:39.808 20:37:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:09:39.808 20:37:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:09:39.808 20:37:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:09:39.808 20:37:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:09:39.808 20:37:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:09:39.809 20:37:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:09:39.809 20:37:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:09:39.809 20:37:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.809 20:37:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:39.809 20:37:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.809 20:37:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:09:39.809 20:37:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:09:39.809 20:37:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:09:40.068 20:37:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:09:40.068 20:37:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:09:40.068 20:37:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:09:40.068 20:37:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.068 20:37:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:09:40.068 20:37:34 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:40.068 20:37:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.068 20:37:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=515 00:09:40.068 20:37:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 515 -ge 100 ']' 00:09:40.068 20:37:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:09:40.068 20:37:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:09:40.068 20:37:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:09:40.068 20:37:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:09:40.068 20:37:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.068 20:37:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:40.068 [2024-11-26 20:37:34.913040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:40.069 [2024-11-26 20:37:34.913138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:40.069 [2024-11-26 20:37:34.913186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:73856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:40.069 [2024-11-26 20:37:34.913203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:40.069 [2024-11-26 20:37:34.913224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:73984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:40.069 [2024-11-26 20:37:34.913240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:40.069 [2024-11-26 20:37:34.913259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:74112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:40.069 [2024-11-26 20:37:34.913275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:40.069 [2024-11-26 20:37:34.913294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:74240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:40.069 [2024-11-26 20:37:34.913311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:40.069 [2024-11-26 20:37:34.913330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:74368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:40.069 [2024-11-26 20:37:34.913346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:40.069 [2024-11-26 20:37:34.913364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:74496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:40.069 [2024-11-26 
20:37:34.913380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:40.069 [2024-11-26 20:37:34.913398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:74624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:40.069 [2024-11-26 20:37:34.913413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:40.069 [2024-11-26 20:37:34.913431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:74752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:40.069 [2024-11-26 20:37:34.913447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:40.069 [2024-11-26 20:37:34.913465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:74880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:40.069 [2024-11-26 20:37:34.913481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:40.069 [2024-11-26 20:37:34.913499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:75008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:40.069 [2024-11-26 20:37:34.913514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:40.069 [2024-11-26 20:37:34.913532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:75136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:40.069 [2024-11-26 20:37:34.913547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:40.069 [2024-11-26 20:37:34.913565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:75264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:40.069 [2024-11-26 20:37:34.913580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:40.069 [2024-11-26 20:37:34.913598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:75392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:40.069 [2024-11-26 20:37:34.913614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:40.069 [2024-11-26 20:37:34.913632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:75520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:40.069 [2024-11-26 20:37:34.913664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:40.069 [2024-11-26 20:37:34.913682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:75648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:40.069 [2024-11-26 20:37:34.913698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:40.069 [2024-11-26 20:37:34.913717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:75776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:40.069 [2024-11-26 20:37:34.913733] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:40.069 [2024-11-26 20:37:34.913751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:75904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:40.069 [2024-11-26 20:37:34.913767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:40.069 [2024-11-26 20:37:34.913787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:76032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:40.069 [2024-11-26 20:37:34.913803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:40.069 [2024-11-26 20:37:34.913821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:76160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:40.069 [2024-11-26 20:37:34.913837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:40.069 [2024-11-26 20:37:34.913855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:76288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:40.069 [2024-11-26 20:37:34.913871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:40.069 [2024-11-26 20:37:34.913888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:76416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:40.069 [2024-11-26 20:37:34.913904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:40.069 [2024-11-26 20:37:34.913922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:76544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:40.069 [2024-11-26 20:37:34.913938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:40.069 [2024-11-26 20:37:34.913956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:76672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:40.069 [2024-11-26 20:37:34.913971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:40.069 [2024-11-26 20:37:34.913989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:76800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:40.069 [2024-11-26 20:37:34.914005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:40.069 [2024-11-26 20:37:34.914023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:76928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:40.069 [2024-11-26 20:37:34.914038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:40.069 [2024-11-26 20:37:34.914057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:77056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:40.069 [2024-11-26 20:37:34.914072] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:40.069 [2024-11-26 20:37:34.914092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:77184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:40.069 [2024-11-26 20:37:34.914107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:40.069 [2024-11-26 20:37:34.914125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:77312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:40.069 [2024-11-26 20:37:34.914140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:40.069 [2024-11-26 20:37:34.914167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:77440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:40.069 [2024-11-26 20:37:34.914184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:40.069 [2024-11-26 20:37:34.914202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:77568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:40.069 [2024-11-26 20:37:34.914222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:40.069 [2024-11-26 20:37:34.914240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:77696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:40.069 [2024-11-26 20:37:34.914256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:40.069 [2024-11-26 20:37:34.914273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:77824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:40.069 [2024-11-26 20:37:34.914289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:40.069 [2024-11-26 20:37:34.914307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:77952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:40.069 [2024-11-26 20:37:34.914322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:40.069 [2024-11-26 20:37:34.914341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:78080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:40.069 [2024-11-26 20:37:34.914358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:40.069 [2024-11-26 20:37:34.914376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:78208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:40.069 [2024-11-26 20:37:34.914392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:40.069 [2024-11-26 20:37:34.914410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:78336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:40.069 [2024-11-26 20:37:34.914425] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:40.069 [2024-11-26 20:37:34.914444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:78464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:40.069 [2024-11-26 20:37:34.914459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:40.069 [2024-11-26 20:37:34.914478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:78592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:40.070 [2024-11-26 20:37:34.914494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:40.070 [2024-11-26 20:37:34.914512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:78720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:40.070 [2024-11-26 20:37:34.914528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:40.070 [2024-11-26 20:37:34.914546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:78848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:40.070 [2024-11-26 20:37:34.914561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:40.070 [2024-11-26 20:37:34.914579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:78976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:40.070 [2024-11-26 20:37:34.914595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:40.070 [2024-11-26 20:37:34.914613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:79104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:40.070 [2024-11-26 20:37:34.914628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:40.070 [2024-11-26 20:37:34.914648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:79232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:40.070 [2024-11-26 20:37:34.914665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:40.070 [2024-11-26 20:37:34.914683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:79360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:40.070 [2024-11-26 20:37:34.914698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:40.070 [2024-11-26 20:37:34.914716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:79488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:40.070 [2024-11-26 20:37:34.914731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:40.070 [2024-11-26 20:37:34.914751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:79616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:40.070 [2024-11-26 20:37:34.914770] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:40.070 [2024-11-26 20:37:34.914788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:79744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:40.070 [2024-11-26 20:37:34.914803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:40.070 [2024-11-26 20:37:34.914820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:79872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:40.070 [2024-11-26 20:37:34.914836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:40.070 [2024-11-26 20:37:34.914854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:80000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:40.070 [2024-11-26 20:37:34.914870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:40.070 [2024-11-26 20:37:34.914887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:80128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:40.070 [2024-11-26 20:37:34.914903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:40.070 [2024-11-26 20:37:34.914920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:80256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:40.070 [2024-11-26 20:37:34.914936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:40.070 [2024-11-26 20:37:34.914953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:80384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:40.070 [2024-11-26 20:37:34.914969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:40.070 [2024-11-26 20:37:34.914987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:80512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:40.070 [2024-11-26 20:37:34.915002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:40.070 [2024-11-26 20:37:34.915020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:80640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:40.070 [2024-11-26 20:37:34.915035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:40.070 [2024-11-26 20:37:34.915054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:80768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:40.070 [2024-11-26 20:37:34.915069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:40.070 [2024-11-26 20:37:34.915087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:80896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:40.070 [2024-11-26 20:37:34.915103] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:40.070 [2024-11-26 20:37:34.915121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:81024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:40.070 [2024-11-26 20:37:34.915136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:40.070 [2024-11-26 20:37:34.915167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:81152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:40.070 [2024-11-26 20:37:34.915183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:40.070 [2024-11-26 20:37:34.915209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:81280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:40.070 [2024-11-26 20:37:34.915225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:40.070 [2024-11-26 20:37:34.915243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:81408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:40.070 [2024-11-26 20:37:34.915269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:40.070 [2024-11-26 20:37:34.915291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:81536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:40.070 [2024-11-26 20:37:34.915307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:40.070 [2024-11-26 20:37:34.915326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:81664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:40.070 [2024-11-26 20:37:34.915344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:40.070 [2024-11-26 20:37:34.915362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:81792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:40.070 [2024-11-26 20:37:34.915378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:40.070 [2024-11-26 20:37:34.915395] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23c82d0 is same with the state(6) to be set 00:09:40.070 task offset: 73728 on job bdev=Nvme0n1 fails 00:09:40.070 00:09:40.070 Latency(us) 00:09:40.070 [2024-11-26T20:37:35.063Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:40.070 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:09:40.070 Job: Nvme0n1 ended in about 0.45 seconds with error 00:09:40.070 Verification LBA range: start 0x0 length 0x400 00:09:40.070 Nvme0n1 : 0.45 1287.47 80.47 143.05 0.00 43177.50 5149.26 66409.81 00:09:40.070 [2024-11-26T20:37:35.064Z] =================================================================================================================== 00:09:40.071 [2024-11-26T20:37:35.064Z] Total : 1287.47 80.47 143.05 0.00 43177.50 5149.26 66409.81 00:09:40.071 [2024-11-26 
20:37:34.916750] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:09:40.071 [2024-11-26 20:37:34.919794] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:40.071 [2024-11-26 20:37:34.919834] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23cdce0 (9): Bad file descriptor 00:09:40.071 [2024-11-26 20:37:34.921833] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:09:40.071 [2024-11-26 20:37:34.921955] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:09:40.071 [2024-11-26 20:37:34.921987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:40.071 [2024-11-26 20:37:34.922009] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:09:40.071 [2024-11-26 20:37:34.922027] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:09:40.071 [2024-11-26 20:37:34.922042] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:09:40.071 [2024-11-26 20:37:34.922058] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23cdce0 00:09:40.071 [2024-11-26 20:37:34.922094] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23cdce0 (9): Bad file descriptor 00:09:40.071 [2024-11-26 20:37:34.922116] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:09:40.071 [2024-11-26 20:37:34.922133] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:09:40.071 [2024-11-26 20:37:34.922151] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:09:40.071 [2024-11-26 20:37:34.922181] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:09:40.071 20:37:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.071 20:37:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:09:40.071 20:37:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.071 20:37:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:40.071 20:37:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.071 20:37:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:09:41.004 20:37:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 62765 00:09:41.004 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (62765) - No such process 00:09:41.004 20:37:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:09:41.004 20:37:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:09:41.004 20:37:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:09:41.004 20:37:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:09:41.004 20:37:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:09:41.004 20:37:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:09:41.004 20:37:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:41.004 20:37:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:41.004 { 00:09:41.004 "params": { 00:09:41.004 "name": "Nvme$subsystem", 00:09:41.004 "trtype": "$TEST_TRANSPORT", 00:09:41.004 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:41.004 "adrfam": "ipv4", 00:09:41.004 "trsvcid": "$NVMF_PORT", 00:09:41.004 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:41.004 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:41.004 "hdgst": ${hdgst:-false}, 00:09:41.004 "ddgst": ${ddgst:-false} 00:09:41.004 }, 00:09:41.004 "method": "bdev_nvme_attach_controller" 00:09:41.004 } 00:09:41.004 EOF 00:09:41.004 )") 00:09:41.004 20:37:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:09:41.004 20:37:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 
00:09:41.004 20:37:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:09:41.004 20:37:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:41.004 "params": { 00:09:41.004 "name": "Nvme0", 00:09:41.004 "trtype": "tcp", 00:09:41.004 "traddr": "10.0.0.3", 00:09:41.004 "adrfam": "ipv4", 00:09:41.004 "trsvcid": "4420", 00:09:41.004 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:41.004 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:09:41.004 "hdgst": false, 00:09:41.004 "ddgst": false 00:09:41.004 }, 00:09:41.004 "method": "bdev_nvme_attach_controller" 00:09:41.004 }' 00:09:41.262 [2024-11-26 20:37:36.001899] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:09:41.262 [2024-11-26 20:37:36.002023] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62805 ] 00:09:41.262 [2024-11-26 20:37:36.154826] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:41.262 [2024-11-26 20:37:36.249227] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:41.520 [2024-11-26 20:37:36.342424] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:41.520 Running I/O for 1 seconds... 00:09:42.899 1472.00 IOPS, 92.00 MiB/s 00:09:42.899 Latency(us) 00:09:42.899 [2024-11-26T20:37:37.892Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:42.899 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:09:42.899 Verification LBA range: start 0x0 length 0x400 00:09:42.899 Nvme0n1 : 1.02 1508.74 94.30 0.00 0.00 41640.38 5086.84 36200.84 00:09:42.899 [2024-11-26T20:37:37.892Z] =================================================================================================================== 00:09:42.899 [2024-11-26T20:37:37.892Z] Total : 1508.74 94.30 0.00 0.00 41640.38 5086.84 36200.84 00:09:42.899 20:37:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:09:42.899 20:37:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:09:42.899 20:37:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:09:42.900 20:37:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:09:42.900 20:37:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:09:42.900 20:37:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:42.900 20:37:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:09:42.900 20:37:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:42.900 20:37:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:09:42.900 20:37:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:42.900 20:37:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:42.900 rmmod nvme_tcp 00:09:43.158 rmmod nvme_fabrics 
00:09:43.158 rmmod nvme_keyring 00:09:43.158 20:37:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:43.158 20:37:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:09:43.158 20:37:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:09:43.158 20:37:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 62705 ']' 00:09:43.158 20:37:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 62705 00:09:43.158 20:37:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 62705 ']' 00:09:43.158 20:37:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 62705 00:09:43.158 20:37:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:09:43.158 20:37:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:43.158 20:37:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62705 00:09:43.158 killing process with pid 62705 00:09:43.158 20:37:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:43.158 20:37:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:43.158 20:37:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62705' 00:09:43.158 20:37:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 62705 00:09:43.158 20:37:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 62705 00:09:43.417 [2024-11-26 20:37:38.205015] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:09:43.417 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:43.417 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:43.417 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:43.417 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:09:43.417 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:09:43.417 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:43.417 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:09:43.417 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:43.417 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:43.417 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:43.417 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:43.417 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:43.417 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@236 
-- # ip link set nvmf_tgt_br2 nomaster 00:09:43.417 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:43.417 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:43.417 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:43.417 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:43.417 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:43.417 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:09:43.676 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:43.676 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:43.676 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:43.676 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@246 -- # remove_spdk_ns 00:09:43.676 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:43.676 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:43.676 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:43.676 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@300 -- # return 0 00:09:43.676 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:09:43.676 ************************************ 00:09:43.676 END TEST nvmf_host_management 00:09:43.676 ************************************ 00:09:43.676 00:09:43.676 real 0m6.885s 00:09:43.676 user 0m24.544s 00:09:43.676 sys 0m2.110s 00:09:43.676 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:43.676 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:43.676 20:37:38 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:09:43.676 20:37:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:43.676 20:37:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:43.676 20:37:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:43.676 ************************************ 00:09:43.676 START TEST nvmf_lvol 00:09:43.676 ************************************ 00:09:43.676 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:09:43.676 * Looking for test storage... 
00:09:43.937 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:43.937 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:43.937 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:09:43.937 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:43.937 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:43.937 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:43.937 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:43.937 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:43.937 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:09:43.937 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:09:43.937 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:09:43.937 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:09:43.937 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:09:43.937 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:09:43.937 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:09:43.937 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:43.937 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:09:43.937 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:09:43.937 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:43.937 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:43.937 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:09:43.937 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:09:43.937 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:43.937 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:09:43.937 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:09:43.937 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:09:43.937 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:09:43.937 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:43.937 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:09:43.937 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:09:43.937 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:43.937 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:43.937 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:09:43.937 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:43.937 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:43.937 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:43.937 --rc genhtml_branch_coverage=1 00:09:43.937 --rc genhtml_function_coverage=1 00:09:43.937 --rc genhtml_legend=1 00:09:43.937 --rc geninfo_all_blocks=1 00:09:43.937 --rc geninfo_unexecuted_blocks=1 00:09:43.937 00:09:43.937 ' 00:09:43.937 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:43.937 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:43.937 --rc genhtml_branch_coverage=1 00:09:43.937 --rc genhtml_function_coverage=1 00:09:43.937 --rc genhtml_legend=1 00:09:43.937 --rc geninfo_all_blocks=1 00:09:43.937 --rc geninfo_unexecuted_blocks=1 00:09:43.937 00:09:43.937 ' 00:09:43.937 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:43.937 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:43.937 --rc genhtml_branch_coverage=1 00:09:43.937 --rc genhtml_function_coverage=1 00:09:43.937 --rc genhtml_legend=1 00:09:43.937 --rc geninfo_all_blocks=1 00:09:43.937 --rc geninfo_unexecuted_blocks=1 00:09:43.937 00:09:43.937 ' 00:09:43.937 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:43.937 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:43.937 --rc genhtml_branch_coverage=1 00:09:43.937 --rc genhtml_function_coverage=1 00:09:43.937 --rc genhtml_legend=1 00:09:43.938 --rc geninfo_all_blocks=1 00:09:43.938 --rc geninfo_unexecuted_blocks=1 00:09:43.938 00:09:43.938 ' 00:09:43.938 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:43.938 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:09:43.938 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:43.938 20:37:38 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:43.938 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:43.938 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:43.938 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:43.938 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:43.938 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:43.938 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:43.938 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:43.938 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:43.938 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b 00:09:43.938 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=5b7a0101-ee75-44bd-b64f-b6a56d193f2b 00:09:43.938 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:43.938 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:43.938 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:43.938 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:43.938 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:43.938 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:09:43.938 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:43.938 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:43.938 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:43.938 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:43.938 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:43.938 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:43.938 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:09:43.938 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:43.938 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:09:43.938 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:43.938 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:43.938 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:43.938 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:43.938 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:43.938 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:43.938 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:43.938 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:43.938 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:43.938 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:43.938 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:43.938 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:43.938 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:09:43.938 
20:37:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:09:43.938 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:43.938 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:09:43.938 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:43.938 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:43.938 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:43.938 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:43.938 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:43.938 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:43.938 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:43.938 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:43.938 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:09:43.938 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:09:43.938 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:09:43.938 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:09:43.938 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:09:43.938 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@460 -- # nvmf_veth_init 00:09:43.938 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:43.938 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:43.938 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:43.938 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:43.938 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:43.938 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:43.938 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:43.938 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:43.938 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:43.938 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:43.938 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:43.938 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:43.938 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:43.938 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 
00:09:43.938 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:43.938 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:43.938 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:43.938 Cannot find device "nvmf_init_br" 00:09:43.938 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # true 00:09:43.938 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:43.938 Cannot find device "nvmf_init_br2" 00:09:43.938 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # true 00:09:43.938 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:43.938 Cannot find device "nvmf_tgt_br" 00:09:43.938 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@164 -- # true 00:09:43.938 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:43.938 Cannot find device "nvmf_tgt_br2" 00:09:43.938 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@165 -- # true 00:09:43.938 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:43.938 Cannot find device "nvmf_init_br" 00:09:43.938 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # true 00:09:43.938 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:43.938 Cannot find device "nvmf_init_br2" 00:09:43.938 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@167 -- # true 00:09:43.938 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:43.938 Cannot find device "nvmf_tgt_br" 00:09:43.938 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@168 -- # true 00:09:43.938 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:44.197 Cannot find device "nvmf_tgt_br2" 00:09:44.197 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # true 00:09:44.197 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:44.197 Cannot find device "nvmf_br" 00:09:44.197 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # true 00:09:44.197 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:44.197 Cannot find device "nvmf_init_if" 00:09:44.197 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # true 00:09:44.197 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:44.197 Cannot find device "nvmf_init_if2" 00:09:44.197 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@172 -- # true 00:09:44.197 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:44.197 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:44.197 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@173 -- # true 00:09:44.197 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:44.197 Cannot open network namespace "nvmf_tgt_ns_spdk": No 
such file or directory 00:09:44.197 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # true 00:09:44.197 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:44.197 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:44.197 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:44.197 20:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:44.197 20:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:44.197 20:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:44.197 20:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:44.197 20:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:44.197 20:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:44.197 20:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:44.197 20:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:44.197 20:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:44.197 20:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:44.197 20:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:44.197 20:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:44.197 20:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:44.197 20:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:44.197 20:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:44.454 20:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:44.454 20:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:44.454 20:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:44.454 20:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:44.454 20:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:44.454 20:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:44.454 20:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:44.454 20:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:44.454 20:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@217 -- # ipts -I INPUT 
1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:44.454 20:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:44.454 20:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:44.454 20:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:44.454 20:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:44.454 20:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:44.454 20:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:44.454 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:44.454 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.173 ms 00:09:44.454 00:09:44.454 --- 10.0.0.3 ping statistics --- 00:09:44.454 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:44.454 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:09:44.454 20:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:44.454 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:44.454 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.086 ms 00:09:44.454 00:09:44.454 --- 10.0.0.4 ping statistics --- 00:09:44.454 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:44.454 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:09:44.454 20:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:44.454 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:44.454 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.045 ms 00:09:44.454 00:09:44.454 --- 10.0.0.1 ping statistics --- 00:09:44.454 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:44.454 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:09:44.454 20:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:44.454 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:44.454 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.068 ms 00:09:44.454 00:09:44.454 --- 10.0.0.2 ping statistics --- 00:09:44.454 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:44.454 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:09:44.454 20:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:44.454 20:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@461 -- # return 0 00:09:44.454 20:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:44.454 20:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:44.454 20:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:44.454 20:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:44.454 20:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:44.454 20:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:44.454 20:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:44.454 20:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:09:44.454 20:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:44.454 20:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:44.454 20:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:44.454 20:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=63073 00:09:44.454 20:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 63073 00:09:44.454 20:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 63073 ']' 00:09:44.454 20:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:44.454 20:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:44.454 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:44.454 20:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:44.454 20:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:44.454 20:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:09:44.454 20:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:44.454 [2024-11-26 20:37:39.389573] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:09:44.454 [2024-11-26 20:37:39.389683] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:44.712 [2024-11-26 20:37:39.539896] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:44.712 [2024-11-26 20:37:39.622632] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:44.712 [2024-11-26 20:37:39.622950] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:44.712 [2024-11-26 20:37:39.623099] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:44.712 [2024-11-26 20:37:39.623316] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:44.712 [2024-11-26 20:37:39.623389] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:44.712 [2024-11-26 20:37:39.625092] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:44.712 [2024-11-26 20:37:39.625234] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:44.712 [2024-11-26 20:37:39.625232] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:44.712 [2024-11-26 20:37:39.678931] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:44.970 20:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:44.970 20:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:09:44.970 20:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:44.970 20:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:44.970 20:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:44.970 20:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:44.971 20:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:45.229 [2024-11-26 20:37:40.027454] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:45.229 20:37:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:45.488 20:37:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:09:45.488 20:37:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:46.138 20:37:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:09:46.138 20:37:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:09:46.397 20:37:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:09:46.655 20:37:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=7ab16ff2-267b-4fec-a44f-031c5e3ef7f9 00:09:46.655 20:37:41 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 7ab16ff2-267b-4fec-a44f-031c5e3ef7f9 lvol 20 00:09:46.913 20:37:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=b6f9d23b-b491-4289-9445-42ddd7b0dcb5 00:09:46.913 20:37:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:47.170 20:37:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 b6f9d23b-b491-4289-9445-42ddd7b0dcb5 00:09:47.426 20:37:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:09:47.684 [2024-11-26 20:37:42.628278] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:47.684 20:37:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:09:48.249 20:37:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=63141 00:09:48.249 20:37:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:09:48.249 20:37:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:09:49.180 20:37:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot b6f9d23b-b491-4289-9445-42ddd7b0dcb5 MY_SNAPSHOT 00:09:49.438 20:37:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=282a6d42-7b28-419d-88af-ab04d33136ab 00:09:49.438 20:37:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize b6f9d23b-b491-4289-9445-42ddd7b0dcb5 30 00:09:50.003 20:37:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone 282a6d42-7b28-419d-88af-ab04d33136ab MY_CLONE 00:09:50.260 20:37:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=fbedd851-362d-47a7-a170-f4434020b336 00:09:50.260 20:37:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate fbedd851-362d-47a7-a170-f4434020b336 00:09:50.827 20:37:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 63141 00:09:58.973 Initializing NVMe Controllers 00:09:58.973 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode0 00:09:58.973 Controller IO queue size 128, less than required. 00:09:58.973 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:09:58.973 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:09:58.973 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:09:58.973 Initialization complete. Launching workers. 
00:09:58.973 ======================================================== 00:09:58.973 Latency(us) 00:09:58.973 Device Information : IOPS MiB/s Average min max 00:09:58.973 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10099.00 39.45 12679.07 3527.48 63965.30 00:09:58.973 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10353.90 40.44 12367.24 3409.08 71036.44 00:09:58.973 ======================================================== 00:09:58.973 Total : 20452.89 79.89 12521.21 3409.08 71036.44 00:09:58.973 00:09:58.973 20:37:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:58.973 20:37:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete b6f9d23b-b491-4289-9445-42ddd7b0dcb5 00:09:59.232 20:37:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 7ab16ff2-267b-4fec-a44f-031c5e3ef7f9 00:09:59.491 20:37:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:09:59.491 20:37:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:09:59.491 20:37:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:09:59.491 20:37:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:59.491 20:37:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:09:59.491 20:37:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:59.491 20:37:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:09:59.491 20:37:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:59.491 20:37:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:59.491 rmmod nvme_tcp 00:09:59.491 rmmod nvme_fabrics 00:09:59.491 rmmod nvme_keyring 00:09:59.491 20:37:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:59.491 20:37:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:09:59.491 20:37:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:09:59.491 20:37:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 63073 ']' 00:09:59.491 20:37:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 63073 00:09:59.491 20:37:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 63073 ']' 00:09:59.491 20:37:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 63073 00:09:59.491 20:37:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:09:59.491 20:37:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:59.491 20:37:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63073 00:09:59.751 killing process with pid 63073 00:09:59.751 20:37:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:59.751 20:37:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:59.751 20:37:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 63073' 00:09:59.751 20:37:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 63073 00:09:59.751 20:37:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 63073 00:10:00.009 20:37:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:00.009 20:37:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:00.009 20:37:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:00.009 20:37:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:10:00.009 20:37:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:10:00.009 20:37:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:10:00.009 20:37:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:00.009 20:37:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:00.009 20:37:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:10:00.009 20:37:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:10:00.009 20:37:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:10:00.010 20:37:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:10:00.010 20:37:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:10:00.010 20:37:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:10:00.010 20:37:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:10:00.010 20:37:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:10:00.010 20:37:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:10:00.010 20:37:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:10:00.010 20:37:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:10:00.010 20:37:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:10:00.010 20:37:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:00.010 20:37:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:00.268 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@246 -- # remove_spdk_ns 00:10:00.268 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:00.268 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:00.268 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:00.268 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@300 -- # return 0 00:10:00.268 ************************************ 00:10:00.268 END TEST nvmf_lvol 00:10:00.268 ************************************ 00:10:00.268 00:10:00.268 real 0m16.478s 00:10:00.268 user 
1m5.504s 00:10:00.268 sys 0m5.999s 00:10:00.268 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:00.268 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:10:00.268 20:37:55 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:10:00.268 20:37:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:00.268 20:37:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:00.268 20:37:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:00.268 ************************************ 00:10:00.269 START TEST nvmf_lvs_grow 00:10:00.269 ************************************ 00:10:00.269 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:10:00.269 * Looking for test storage... 00:10:00.269 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:00.269 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:00.269 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:00.269 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:10:00.528 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:00.528 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:00.528 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:00.528 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:00.528 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:10:00.528 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:10:00.528 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:10:00.528 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:10:00.528 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:10:00.528 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:10:00.528 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:10:00.528 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:00.528 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:10:00.528 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:10:00.528 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:00.528 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:00.528 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:10:00.528 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:10:00.528 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:00.528 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:10:00.528 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:10:00.528 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:10:00.528 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:10:00.528 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:00.528 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:10:00.528 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:10:00.528 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:00.528 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:00.528 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:10:00.528 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:00.528 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:00.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:00.528 --rc genhtml_branch_coverage=1 00:10:00.528 --rc genhtml_function_coverage=1 00:10:00.528 --rc genhtml_legend=1 00:10:00.528 --rc geninfo_all_blocks=1 00:10:00.528 --rc geninfo_unexecuted_blocks=1 00:10:00.528 00:10:00.528 ' 00:10:00.528 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:00.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:00.528 --rc genhtml_branch_coverage=1 00:10:00.528 --rc genhtml_function_coverage=1 00:10:00.528 --rc genhtml_legend=1 00:10:00.528 --rc geninfo_all_blocks=1 00:10:00.528 --rc geninfo_unexecuted_blocks=1 00:10:00.528 00:10:00.528 ' 00:10:00.528 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:00.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:00.528 --rc genhtml_branch_coverage=1 00:10:00.528 --rc genhtml_function_coverage=1 00:10:00.528 --rc genhtml_legend=1 00:10:00.528 --rc geninfo_all_blocks=1 00:10:00.528 --rc geninfo_unexecuted_blocks=1 00:10:00.528 00:10:00.528 ' 00:10:00.528 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:00.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:00.528 --rc genhtml_branch_coverage=1 00:10:00.528 --rc genhtml_function_coverage=1 00:10:00.528 --rc genhtml_legend=1 00:10:00.528 --rc geninfo_all_blocks=1 00:10:00.528 --rc geninfo_unexecuted_blocks=1 00:10:00.528 00:10:00.528 ' 00:10:00.529 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:00.529 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:10:00.529 20:37:55 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:00.529 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:00.529 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:00.529 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:00.529 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:00.529 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:00.529 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:00.529 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:00.529 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:00.529 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:00.529 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b 00:10:00.529 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=5b7a0101-ee75-44bd-b64f-b6a56d193f2b 00:10:00.529 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:00.529 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:00.529 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:00.529 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:00.529 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:00.529 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:10:00.529 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:00.529 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:00.529 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:00.529 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:00.529 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:00.529 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:00.529 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:10:00.529 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:00.529 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:10:00.529 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:00.529 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:00.529 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:00.529 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:00.529 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:00.529 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:00.529 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:00.529 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:00.529 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:00.529 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:00.529 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:00.529 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 
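A minimal recap (not captured output), assuming the rpc_py and bdevperf_rpc_sock values shown just above: from this point the lvs_grow test drives two SPDK processes over JSON-RPC — target-side setup goes to nvmf_tgt on the default /var/tmp/spdk.sock, while the bdevperf process started later is addressed explicitly with -s on every call.

  # target-side RPCs use the default UNIX socket (/var/tmp/spdk.sock)
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  # bdevperf RPCs select its own socket explicitly
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 \
      -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0

Both calls appear verbatim later in this trace; the sketch only collects them in one place.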
00:10:00.529 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:10:00.529 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:00.529 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:00.529 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:00.529 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:00.529 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:00.529 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:00.529 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:00.529 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:00.529 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:10:00.529 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:10:00.529 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:10:00.529 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:10:00.529 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:10:00.529 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@460 -- # nvmf_veth_init 00:10:00.529 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:00.529 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:10:00.529 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:10:00.529 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:10:00.529 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:00.529 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:10:00.529 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:00.529 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:10:00.529 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:00.529 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:10:00.529 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:00.529 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:00.529 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:00.529 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:00.529 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 
00:10:00.529 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:00.529 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:10:00.529 Cannot find device "nvmf_init_br" 00:10:00.529 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # true 00:10:00.529 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:10:00.529 Cannot find device "nvmf_init_br2" 00:10:00.529 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # true 00:10:00.529 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:10:00.529 Cannot find device "nvmf_tgt_br" 00:10:00.529 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@164 -- # true 00:10:00.529 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:10:00.529 Cannot find device "nvmf_tgt_br2" 00:10:00.529 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@165 -- # true 00:10:00.529 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:10:00.529 Cannot find device "nvmf_init_br" 00:10:00.529 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # true 00:10:00.529 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:10:00.529 Cannot find device "nvmf_init_br2" 00:10:00.529 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@167 -- # true 00:10:00.529 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:10:00.529 Cannot find device "nvmf_tgt_br" 00:10:00.529 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@168 -- # true 00:10:00.529 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:10:00.529 Cannot find device "nvmf_tgt_br2" 00:10:00.529 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # true 00:10:00.529 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:10:00.788 Cannot find device "nvmf_br" 00:10:00.788 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # true 00:10:00.788 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:10:00.788 Cannot find device "nvmf_init_if" 00:10:00.788 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # true 00:10:00.788 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:10:00.788 Cannot find device "nvmf_init_if2" 00:10:00.788 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # true 00:10:00.788 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:00.788 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:00.788 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@173 -- # true 00:10:00.788 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:00.788 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or 
directory 00:10:00.788 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # true 00:10:00.788 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:10:00.788 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:00.788 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:10:00.788 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:00.788 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:00.788 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:00.788 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:00.788 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:00.788 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:10:00.788 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:10:00.788 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:10:00.788 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:10:00.788 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:10:00.788 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:10:00.788 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:10:00.788 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:10:00.788 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:10:00.788 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:00.788 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:00.788 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:00.788 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:10:00.788 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:10:00.788 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:10:01.046 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:10:01.046 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:01.046 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 
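A condensed sketch of the veth topology that nvmf_veth_init has just rebuilt, assuming the interface names and 10.0.0.0/24 addressing shown in the trace; the second initiator/target pair (nvmf_init_if2/nvmf_tgt_if2 on 10.0.0.2 and 10.0.0.4) is created the same way, and the iptables ACCEPT rules that follow immediately below are omitted here.

  # namespace for the SPDK target, plus veth pairs bridged back to the host
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  # initiator side stays in the root namespace; the target end gets 10.0.0.3
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  # host-side bridge ties the two peer ends together
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br

Every command in the sketch mirrors an ip invocation visible in the trace above.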
00:10:01.046 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:01.046 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:10:01.046 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:10:01.046 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:10:01.046 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:01.046 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:10:01.046 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:10:01.046 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:01.046 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.099 ms 00:10:01.046 00:10:01.046 --- 10.0.0.3 ping statistics --- 00:10:01.046 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:01.046 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:10:01.046 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:10:01.046 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:10:01.046 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.054 ms 00:10:01.046 00:10:01.046 --- 10.0.0.4 ping statistics --- 00:10:01.046 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:01.046 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:10:01.046 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:01.046 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:01.046 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:10:01.046 00:10:01.047 --- 10.0.0.1 ping statistics --- 00:10:01.047 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:01.047 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:10:01.047 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:10:01.047 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:01.047 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.067 ms 00:10:01.047 00:10:01.047 --- 10.0.0.2 ping statistics --- 00:10:01.047 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:01.047 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:10:01.047 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:01.047 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@461 -- # return 0 00:10:01.047 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:01.047 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:01.047 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:01.047 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:01.047 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:01.047 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:01.047 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:01.047 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:10:01.047 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:01.047 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:01.047 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:01.047 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=63528 00:10:01.047 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 63528 00:10:01.047 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:10:01.047 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:01.047 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 63528 ']' 00:10:01.047 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:01.047 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:01.047 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:01.047 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:01.047 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:01.047 [2024-11-26 20:37:55.961586] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:10:01.047 [2024-11-26 20:37:55.961834] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:01.305 [2024-11-26 20:37:56.114096] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:01.305 [2024-11-26 20:37:56.200732] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:01.305 [2024-11-26 20:37:56.200808] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:01.305 [2024-11-26 20:37:56.200824] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:01.305 [2024-11-26 20:37:56.200837] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:01.305 [2024-11-26 20:37:56.200849] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:01.305 [2024-11-26 20:37:56.201329] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:01.305 [2024-11-26 20:37:56.251862] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:02.244 20:37:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:02.244 20:37:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:10:02.244 20:37:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:02.244 20:37:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:02.244 20:37:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:02.244 20:37:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:02.244 20:37:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:02.503 [2024-11-26 20:37:57.305048] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:02.503 20:37:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:10:02.503 20:37:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:02.503 20:37:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:02.503 20:37:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:02.503 ************************************ 00:10:02.503 START TEST lvs_grow_clean 00:10:02.503 ************************************ 00:10:02.503 20:37:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:10:02.503 20:37:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:10:02.503 20:37:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:10:02.503 20:37:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:10:02.503 20:37:57 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:10:02.503 20:37:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:10:02.503 20:37:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:10:02.503 20:37:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:02.503 20:37:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:02.503 20:37:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:02.762 20:37:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:10:02.762 20:37:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:10:03.021 20:37:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=67431f12-3c4b-46af-8481-1993793a6d4b 00:10:03.021 20:37:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:10:03.021 20:37:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 67431f12-3c4b-46af-8481-1993793a6d4b 00:10:03.588 20:37:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:10:03.588 20:37:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:10:03.588 20:37:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 67431f12-3c4b-46af-8481-1993793a6d4b lvol 150 00:10:03.846 20:37:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=0a3be3a6-408c-4142-9a9e-e314010494c0 00:10:03.846 20:37:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:03.846 20:37:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:10:04.105 [2024-11-26 20:37:58.920089] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:10:04.105 [2024-11-26 20:37:58.920210] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:10:04.105 true 00:10:04.105 20:37:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 67431f12-3c4b-46af-8481-1993793a6d4b 00:10:04.105 20:37:58 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:10:04.373 20:37:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:10:04.373 20:37:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:10:04.633 20:37:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 0a3be3a6-408c-4142-9a9e-e314010494c0 00:10:05.200 20:37:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:10:05.457 [2024-11-26 20:38:00.224873] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:05.457 20:38:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:10:05.716 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:05.716 20:38:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=63626 00:10:05.716 20:38:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:10:05.716 20:38:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:05.716 20:38:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 63626 /var/tmp/bdevperf.sock 00:10:05.716 20:38:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 63626 ']' 00:10:05.716 20:38:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:05.716 20:38:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:05.716 20:38:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:05.716 20:38:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:05.716 20:38:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:10:05.716 [2024-11-26 20:38:00.543448] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:10:05.716 [2024-11-26 20:38:00.543738] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63626 ] 00:10:05.716 [2024-11-26 20:38:00.688184] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:05.976 [2024-11-26 20:38:00.770037] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:05.976 [2024-11-26 20:38:00.853396] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:06.927 20:38:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:06.927 20:38:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:10:06.927 20:38:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:10:06.927 Nvme0n1 00:10:06.927 20:38:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:10:07.516 [ 00:10:07.516 { 00:10:07.516 "name": "Nvme0n1", 00:10:07.516 "aliases": [ 00:10:07.516 "0a3be3a6-408c-4142-9a9e-e314010494c0" 00:10:07.516 ], 00:10:07.516 "product_name": "NVMe disk", 00:10:07.516 "block_size": 4096, 00:10:07.516 "num_blocks": 38912, 00:10:07.516 "uuid": "0a3be3a6-408c-4142-9a9e-e314010494c0", 00:10:07.516 "numa_id": -1, 00:10:07.516 "assigned_rate_limits": { 00:10:07.516 "rw_ios_per_sec": 0, 00:10:07.516 "rw_mbytes_per_sec": 0, 00:10:07.516 "r_mbytes_per_sec": 0, 00:10:07.516 "w_mbytes_per_sec": 0 00:10:07.516 }, 00:10:07.516 "claimed": false, 00:10:07.516 "zoned": false, 00:10:07.516 "supported_io_types": { 00:10:07.516 "read": true, 00:10:07.516 "write": true, 00:10:07.516 "unmap": true, 00:10:07.516 "flush": true, 00:10:07.516 "reset": true, 00:10:07.516 "nvme_admin": true, 00:10:07.516 "nvme_io": true, 00:10:07.516 "nvme_io_md": false, 00:10:07.516 "write_zeroes": true, 00:10:07.516 "zcopy": false, 00:10:07.516 "get_zone_info": false, 00:10:07.516 "zone_management": false, 00:10:07.516 "zone_append": false, 00:10:07.516 "compare": true, 00:10:07.516 "compare_and_write": true, 00:10:07.516 "abort": true, 00:10:07.516 "seek_hole": false, 00:10:07.516 "seek_data": false, 00:10:07.516 "copy": true, 00:10:07.516 "nvme_iov_md": false 00:10:07.516 }, 00:10:07.516 "memory_domains": [ 00:10:07.516 { 00:10:07.516 "dma_device_id": "system", 00:10:07.516 "dma_device_type": 1 00:10:07.516 } 00:10:07.516 ], 00:10:07.516 "driver_specific": { 00:10:07.516 "nvme": [ 00:10:07.516 { 00:10:07.516 "trid": { 00:10:07.516 "trtype": "TCP", 00:10:07.516 "adrfam": "IPv4", 00:10:07.516 "traddr": "10.0.0.3", 00:10:07.516 "trsvcid": "4420", 00:10:07.516 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:10:07.516 }, 00:10:07.516 "ctrlr_data": { 00:10:07.516 "cntlid": 1, 00:10:07.516 "vendor_id": "0x8086", 00:10:07.516 "model_number": "SPDK bdev Controller", 00:10:07.516 "serial_number": "SPDK0", 00:10:07.516 "firmware_revision": "25.01", 00:10:07.516 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:07.516 "oacs": { 00:10:07.516 "security": 0, 00:10:07.516 "format": 0, 00:10:07.516 "firmware": 0, 
00:10:07.516 "ns_manage": 0 00:10:07.516 }, 00:10:07.516 "multi_ctrlr": true, 00:10:07.516 "ana_reporting": false 00:10:07.516 }, 00:10:07.516 "vs": { 00:10:07.516 "nvme_version": "1.3" 00:10:07.516 }, 00:10:07.516 "ns_data": { 00:10:07.516 "id": 1, 00:10:07.516 "can_share": true 00:10:07.516 } 00:10:07.516 } 00:10:07.516 ], 00:10:07.516 "mp_policy": "active_passive" 00:10:07.516 } 00:10:07.516 } 00:10:07.516 ] 00:10:07.516 20:38:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=63645 00:10:07.517 20:38:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:10:07.517 20:38:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:10:07.517 Running I/O for 10 seconds... 00:10:08.451 Latency(us) 00:10:08.451 [2024-11-26T20:38:03.444Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:08.451 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:08.451 Nvme0n1 : 1.00 7846.00 30.65 0.00 0.00 0.00 0.00 0.00 00:10:08.451 [2024-11-26T20:38:03.444Z] =================================================================================================================== 00:10:08.451 [2024-11-26T20:38:03.444Z] Total : 7846.00 30.65 0.00 0.00 0.00 0.00 0.00 00:10:08.451 00:10:09.384 20:38:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 67431f12-3c4b-46af-8481-1993793a6d4b 00:10:09.384 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:09.384 Nvme0n1 : 2.00 7923.50 30.95 0.00 0.00 0.00 0.00 0.00 00:10:09.384 [2024-11-26T20:38:04.377Z] =================================================================================================================== 00:10:09.384 [2024-11-26T20:38:04.377Z] Total : 7923.50 30.95 0.00 0.00 0.00 0.00 0.00 00:10:09.384 00:10:09.641 true 00:10:09.641 20:38:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 67431f12-3c4b-46af-8481-1993793a6d4b 00:10:09.641 20:38:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:10:10.207 20:38:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:10:10.207 20:38:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:10:10.207 20:38:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 63645 00:10:10.471 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:10.471 Nvme0n1 : 3.00 7907.00 30.89 0.00 0.00 0.00 0.00 0.00 00:10:10.471 [2024-11-26T20:38:05.464Z] =================================================================================================================== 00:10:10.471 [2024-11-26T20:38:05.464Z] Total : 7907.00 30.89 0.00 0.00 0.00 0.00 0.00 00:10:10.471 00:10:11.422 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:11.422 Nvme0n1 : 4.00 7835.25 30.61 0.00 0.00 0.00 0.00 0.00 00:10:11.422 [2024-11-26T20:38:06.415Z] 
=================================================================================================================== 00:10:11.422 [2024-11-26T20:38:06.415Z] Total : 7835.25 30.61 0.00 0.00 0.00 0.00 0.00 00:10:11.422 00:10:12.795 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:12.795 Nvme0n1 : 5.00 7708.80 30.11 0.00 0.00 0.00 0.00 0.00 00:10:12.795 [2024-11-26T20:38:07.788Z] =================================================================================================================== 00:10:12.795 [2024-11-26T20:38:07.788Z] Total : 7708.80 30.11 0.00 0.00 0.00 0.00 0.00 00:10:12.795 00:10:13.367 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:13.367 Nvme0n1 : 6.00 7715.17 30.14 0.00 0.00 0.00 0.00 0.00 00:10:13.367 [2024-11-26T20:38:08.360Z] =================================================================================================================== 00:10:13.367 [2024-11-26T20:38:08.360Z] Total : 7715.17 30.14 0.00 0.00 0.00 0.00 0.00 00:10:13.367 00:10:14.760 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:14.760 Nvme0n1 : 7.00 7719.71 30.16 0.00 0.00 0.00 0.00 0.00 00:10:14.760 [2024-11-26T20:38:09.753Z] =================================================================================================================== 00:10:14.760 [2024-11-26T20:38:09.753Z] Total : 7719.71 30.16 0.00 0.00 0.00 0.00 0.00 00:10:14.760 00:10:15.694 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:15.694 Nvme0n1 : 8.00 7675.50 29.98 0.00 0.00 0.00 0.00 0.00 00:10:15.694 [2024-11-26T20:38:10.687Z] =================================================================================================================== 00:10:15.694 [2024-11-26T20:38:10.687Z] Total : 7675.50 29.98 0.00 0.00 0.00 0.00 0.00 00:10:15.694 00:10:16.703 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:16.703 Nvme0n1 : 9.00 7620.11 29.77 0.00 0.00 0.00 0.00 0.00 00:10:16.703 [2024-11-26T20:38:11.696Z] =================================================================================================================== 00:10:16.703 [2024-11-26T20:38:11.696Z] Total : 7620.11 29.77 0.00 0.00 0.00 0.00 0.00 00:10:16.703 00:10:17.636 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:17.636 Nvme0n1 : 10.00 7594.70 29.67 0.00 0.00 0.00 0.00 0.00 00:10:17.636 [2024-11-26T20:38:12.629Z] =================================================================================================================== 00:10:17.636 [2024-11-26T20:38:12.629Z] Total : 7594.70 29.67 0.00 0.00 0.00 0.00 0.00 00:10:17.636 00:10:17.636 00:10:17.636 Latency(us) 00:10:17.636 [2024-11-26T20:38:12.629Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:17.636 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:17.636 Nvme0n1 : 10.01 7597.29 29.68 0.00 0.00 16843.46 12295.80 107853.53 00:10:17.636 [2024-11-26T20:38:12.629Z] =================================================================================================================== 00:10:17.636 [2024-11-26T20:38:12.629Z] Total : 7597.29 29.68 0.00 0.00 16843.46 12295.80 107853.53 00:10:17.636 { 00:10:17.636 "results": [ 00:10:17.636 { 00:10:17.636 "job": "Nvme0n1", 00:10:17.636 "core_mask": "0x2", 00:10:17.636 "workload": "randwrite", 00:10:17.636 "status": "finished", 00:10:17.636 "queue_depth": 128, 00:10:17.636 "io_size": 4096, 00:10:17.636 "runtime": 
10.013437, 00:10:17.636 "iops": 7597.291519385401, 00:10:17.636 "mibps": 29.676919997599224, 00:10:17.636 "io_failed": 0, 00:10:17.636 "io_timeout": 0, 00:10:17.636 "avg_latency_us": 16843.46229894684, 00:10:17.636 "min_latency_us": 12295.801904761905, 00:10:17.636 "max_latency_us": 107853.53142857143 00:10:17.636 } 00:10:17.636 ], 00:10:17.636 "core_count": 1 00:10:17.636 } 00:10:17.636 20:38:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 63626 00:10:17.636 20:38:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 63626 ']' 00:10:17.636 20:38:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 63626 00:10:17.636 20:38:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:10:17.636 20:38:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:17.636 20:38:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63626 00:10:17.636 killing process with pid 63626 00:10:17.636 Received shutdown signal, test time was about 10.000000 seconds 00:10:17.636 00:10:17.636 Latency(us) 00:10:17.636 [2024-11-26T20:38:12.629Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:17.636 [2024-11-26T20:38:12.629Z] =================================================================================================================== 00:10:17.636 [2024-11-26T20:38:12.629Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:17.636 20:38:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:10:17.636 20:38:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:10:17.636 20:38:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63626' 00:10:17.636 20:38:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 63626 00:10:17.636 20:38:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 63626 00:10:17.894 20:38:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:10:18.153 20:38:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:10:18.410 20:38:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:10:18.410 20:38:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 67431f12-3c4b-46af-8481-1993793a6d4b 00:10:18.668 20:38:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:10:18.668 20:38:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:10:18.668 20:38:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:18.926 [2024-11-26 20:38:13.863781] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:10:18.926 20:38:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 67431f12-3c4b-46af-8481-1993793a6d4b 00:10:18.926 20:38:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:10:18.926 20:38:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 67431f12-3c4b-46af-8481-1993793a6d4b 00:10:18.926 20:38:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:18.926 20:38:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:18.926 20:38:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:18.926 20:38:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:18.926 20:38:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:18.926 20:38:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:18.926 20:38:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:18.926 20:38:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:10:18.926 20:38:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 67431f12-3c4b-46af-8481-1993793a6d4b 00:10:19.493 request: 00:10:19.493 { 00:10:19.493 "uuid": "67431f12-3c4b-46af-8481-1993793a6d4b", 00:10:19.493 "method": "bdev_lvol_get_lvstores", 00:10:19.493 "req_id": 1 00:10:19.493 } 00:10:19.493 Got JSON-RPC error response 00:10:19.493 response: 00:10:19.493 { 00:10:19.493 "code": -19, 00:10:19.493 "message": "No such device" 00:10:19.493 } 00:10:19.493 20:38:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:10:19.493 20:38:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:19.493 20:38:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:19.493 20:38:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:19.493 20:38:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:19.751 aio_bdev 00:10:19.751 20:38:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 
0a3be3a6-408c-4142-9a9e-e314010494c0 00:10:19.751 20:38:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=0a3be3a6-408c-4142-9a9e-e314010494c0 00:10:19.751 20:38:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:19.751 20:38:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:10:19.751 20:38:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:19.751 20:38:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:19.751 20:38:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:10:20.009 20:38:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 0a3be3a6-408c-4142-9a9e-e314010494c0 -t 2000 00:10:20.267 [ 00:10:20.267 { 00:10:20.267 "name": "0a3be3a6-408c-4142-9a9e-e314010494c0", 00:10:20.267 "aliases": [ 00:10:20.267 "lvs/lvol" 00:10:20.267 ], 00:10:20.267 "product_name": "Logical Volume", 00:10:20.267 "block_size": 4096, 00:10:20.267 "num_blocks": 38912, 00:10:20.267 "uuid": "0a3be3a6-408c-4142-9a9e-e314010494c0", 00:10:20.267 "assigned_rate_limits": { 00:10:20.267 "rw_ios_per_sec": 0, 00:10:20.267 "rw_mbytes_per_sec": 0, 00:10:20.267 "r_mbytes_per_sec": 0, 00:10:20.267 "w_mbytes_per_sec": 0 00:10:20.267 }, 00:10:20.267 "claimed": false, 00:10:20.267 "zoned": false, 00:10:20.267 "supported_io_types": { 00:10:20.267 "read": true, 00:10:20.267 "write": true, 00:10:20.267 "unmap": true, 00:10:20.267 "flush": false, 00:10:20.267 "reset": true, 00:10:20.267 "nvme_admin": false, 00:10:20.267 "nvme_io": false, 00:10:20.267 "nvme_io_md": false, 00:10:20.267 "write_zeroes": true, 00:10:20.267 "zcopy": false, 00:10:20.267 "get_zone_info": false, 00:10:20.267 "zone_management": false, 00:10:20.267 "zone_append": false, 00:10:20.267 "compare": false, 00:10:20.267 "compare_and_write": false, 00:10:20.267 "abort": false, 00:10:20.267 "seek_hole": true, 00:10:20.267 "seek_data": true, 00:10:20.267 "copy": false, 00:10:20.267 "nvme_iov_md": false 00:10:20.267 }, 00:10:20.267 "driver_specific": { 00:10:20.267 "lvol": { 00:10:20.267 "lvol_store_uuid": "67431f12-3c4b-46af-8481-1993793a6d4b", 00:10:20.267 "base_bdev": "aio_bdev", 00:10:20.267 "thin_provision": false, 00:10:20.267 "num_allocated_clusters": 38, 00:10:20.267 "snapshot": false, 00:10:20.267 "clone": false, 00:10:20.267 "esnap_clone": false 00:10:20.267 } 00:10:20.267 } 00:10:20.267 } 00:10:20.267 ] 00:10:20.267 20:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:10:20.267 20:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 67431f12-3c4b-46af-8481-1993793a6d4b 00:10:20.267 20:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:10:20.525 20:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:10:20.525 20:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 67431f12-3c4b-46af-8481-1993793a6d4b 00:10:20.525 20:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:10:21.091 20:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:10:21.091 20:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 0a3be3a6-408c-4142-9a9e-e314010494c0 00:10:21.349 20:38:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 67431f12-3c4b-46af-8481-1993793a6d4b 00:10:21.607 20:38:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:21.866 20:38:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:22.433 ************************************ 00:10:22.433 END TEST lvs_grow_clean 00:10:22.433 ************************************ 00:10:22.433 00:10:22.433 real 0m19.839s 00:10:22.433 user 0m17.944s 00:10:22.433 sys 0m3.593s 00:10:22.433 20:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:22.433 20:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:10:22.433 20:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:10:22.433 20:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:22.433 20:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:22.433 20:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:22.433 ************************************ 00:10:22.433 START TEST lvs_grow_dirty 00:10:22.433 ************************************ 00:10:22.433 20:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:10:22.433 20:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:10:22.433 20:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:10:22.433 20:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:10:22.433 20:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:10:22.433 20:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:10:22.433 20:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:10:22.433 20:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:22.433 20:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:22.433 20:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:22.692 20:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:10:22.692 20:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:10:23.257 20:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=f9a1ffb6-4e88-4c6c-9412-b3484c6cfed6 00:10:23.257 20:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f9a1ffb6-4e88-4c6c-9412-b3484c6cfed6 00:10:23.257 20:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:10:23.515 20:38:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:10:23.515 20:38:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:10:23.515 20:38:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u f9a1ffb6-4e88-4c6c-9412-b3484c6cfed6 lvol 150 00:10:23.817 20:38:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=49fbc4b4-a67d-4245-ae46-ce60a5bc1b58 00:10:23.817 20:38:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:23.817 20:38:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:10:24.074 [2024-11-26 20:38:18.865059] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:10:24.074 [2024-11-26 20:38:18.865170] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:10:24.074 true 00:10:24.074 20:38:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:10:24.074 20:38:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f9a1ffb6-4e88-4c6c-9412-b3484c6cfed6 00:10:24.332 20:38:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:10:24.332 20:38:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:10:24.588 20:38:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 49fbc4b4-a67d-4245-ae46-ce60a5bc1b58 00:10:24.845 20:38:19 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:10:25.409 [2024-11-26 20:38:20.097737] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:25.409 20:38:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:10:25.668 20:38:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=63911 00:10:25.668 20:38:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:10:25.668 20:38:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:25.668 20:38:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 63911 /var/tmp/bdevperf.sock 00:10:25.668 20:38:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 63911 ']' 00:10:25.668 20:38:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:25.668 20:38:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:25.668 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:25.668 20:38:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:25.668 20:38:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:25.668 20:38:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:25.668 [2024-11-26 20:38:20.539208] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:10:25.668 [2024-11-26 20:38:20.539763] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63911 ] 00:10:25.927 [2024-11-26 20:38:20.698397] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:25.927 [2024-11-26 20:38:20.791904] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:25.927 [2024-11-26 20:38:20.879598] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:26.185 20:38:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:26.185 20:38:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:10:26.185 20:38:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:10:26.442 Nvme0n1 00:10:26.442 20:38:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:10:26.701 [ 00:10:26.701 { 00:10:26.701 "name": "Nvme0n1", 00:10:26.701 "aliases": [ 00:10:26.701 "49fbc4b4-a67d-4245-ae46-ce60a5bc1b58" 00:10:26.701 ], 00:10:26.701 "product_name": "NVMe disk", 00:10:26.701 "block_size": 4096, 00:10:26.701 "num_blocks": 38912, 00:10:26.701 "uuid": "49fbc4b4-a67d-4245-ae46-ce60a5bc1b58", 00:10:26.701 "numa_id": -1, 00:10:26.701 "assigned_rate_limits": { 00:10:26.701 "rw_ios_per_sec": 0, 00:10:26.701 "rw_mbytes_per_sec": 0, 00:10:26.701 "r_mbytes_per_sec": 0, 00:10:26.701 "w_mbytes_per_sec": 0 00:10:26.701 }, 00:10:26.701 "claimed": false, 00:10:26.701 "zoned": false, 00:10:26.701 "supported_io_types": { 00:10:26.701 "read": true, 00:10:26.701 "write": true, 00:10:26.701 "unmap": true, 00:10:26.701 "flush": true, 00:10:26.701 "reset": true, 00:10:26.701 "nvme_admin": true, 00:10:26.701 "nvme_io": true, 00:10:26.701 "nvme_io_md": false, 00:10:26.701 "write_zeroes": true, 00:10:26.701 "zcopy": false, 00:10:26.701 "get_zone_info": false, 00:10:26.701 "zone_management": false, 00:10:26.701 "zone_append": false, 00:10:26.701 "compare": true, 00:10:26.701 "compare_and_write": true, 00:10:26.701 "abort": true, 00:10:26.701 "seek_hole": false, 00:10:26.701 "seek_data": false, 00:10:26.701 "copy": true, 00:10:26.701 "nvme_iov_md": false 00:10:26.701 }, 00:10:26.701 "memory_domains": [ 00:10:26.701 { 00:10:26.701 "dma_device_id": "system", 00:10:26.701 "dma_device_type": 1 00:10:26.701 } 00:10:26.701 ], 00:10:26.701 "driver_specific": { 00:10:26.701 "nvme": [ 00:10:26.701 { 00:10:26.701 "trid": { 00:10:26.701 "trtype": "TCP", 00:10:26.701 "adrfam": "IPv4", 00:10:26.701 "traddr": "10.0.0.3", 00:10:26.701 "trsvcid": "4420", 00:10:26.701 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:10:26.701 }, 00:10:26.701 "ctrlr_data": { 00:10:26.701 "cntlid": 1, 00:10:26.701 "vendor_id": "0x8086", 00:10:26.701 "model_number": "SPDK bdev Controller", 00:10:26.701 "serial_number": "SPDK0", 00:10:26.701 "firmware_revision": "25.01", 00:10:26.701 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:26.701 "oacs": { 00:10:26.701 "security": 0, 00:10:26.701 "format": 0, 00:10:26.701 "firmware": 0, 
00:10:26.701 "ns_manage": 0 00:10:26.701 }, 00:10:26.701 "multi_ctrlr": true, 00:10:26.701 "ana_reporting": false 00:10:26.701 }, 00:10:26.701 "vs": { 00:10:26.701 "nvme_version": "1.3" 00:10:26.701 }, 00:10:26.701 "ns_data": { 00:10:26.701 "id": 1, 00:10:26.701 "can_share": true 00:10:26.701 } 00:10:26.701 } 00:10:26.701 ], 00:10:26.701 "mp_policy": "active_passive" 00:10:26.701 } 00:10:26.701 } 00:10:26.701 ] 00:10:26.701 20:38:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=63927 00:10:26.701 20:38:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:10:26.701 20:38:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:10:26.961 Running I/O for 10 seconds... 00:10:27.921 Latency(us) 00:10:27.921 [2024-11-26T20:38:22.914Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:27.921 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:27.921 Nvme0n1 : 1.00 8509.00 33.24 0.00 0.00 0.00 0.00 0.00 00:10:27.921 [2024-11-26T20:38:22.914Z] =================================================================================================================== 00:10:27.921 [2024-11-26T20:38:22.914Z] Total : 8509.00 33.24 0.00 0.00 0.00 0.00 0.00 00:10:27.921 00:10:28.858 20:38:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u f9a1ffb6-4e88-4c6c-9412-b3484c6cfed6 00:10:28.858 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:28.858 Nvme0n1 : 2.00 8382.00 32.74 0.00 0.00 0.00 0.00 0.00 00:10:28.858 [2024-11-26T20:38:23.851Z] =================================================================================================================== 00:10:28.858 [2024-11-26T20:38:23.851Z] Total : 8382.00 32.74 0.00 0.00 0.00 0.00 0.00 00:10:28.858 00:10:29.117 true 00:10:29.117 20:38:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f9a1ffb6-4e88-4c6c-9412-b3484c6cfed6 00:10:29.117 20:38:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:10:29.377 20:38:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:10:29.377 20:38:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:10:29.377 20:38:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 63927 00:10:29.943 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:29.943 Nvme0n1 : 3.00 8297.33 32.41 0.00 0.00 0.00 0.00 0.00 00:10:29.943 [2024-11-26T20:38:24.936Z] =================================================================================================================== 00:10:29.943 [2024-11-26T20:38:24.936Z] Total : 8297.33 32.41 0.00 0.00 0.00 0.00 0.00 00:10:29.943 00:10:30.880 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:30.880 Nvme0n1 : 4.00 8178.00 31.95 0.00 0.00 0.00 0.00 0.00 00:10:30.880 [2024-11-26T20:38:25.873Z] 
=================================================================================================================== 00:10:30.880 [2024-11-26T20:38:25.873Z] Total : 8178.00 31.95 0.00 0.00 0.00 0.00 0.00 00:10:30.880 00:10:31.814 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:31.814 Nvme0n1 : 5.00 7863.80 30.72 0.00 0.00 0.00 0.00 0.00 00:10:31.814 [2024-11-26T20:38:26.807Z] =================================================================================================================== 00:10:31.814 [2024-11-26T20:38:26.807Z] Total : 7863.80 30.72 0.00 0.00 0.00 0.00 0.00 00:10:31.814 00:10:33.196 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:33.196 Nvme0n1 : 6.00 7802.00 30.48 0.00 0.00 0.00 0.00 0.00 00:10:33.196 [2024-11-26T20:38:28.189Z] =================================================================================================================== 00:10:33.196 [2024-11-26T20:38:28.189Z] Total : 7802.00 30.48 0.00 0.00 0.00 0.00 0.00 00:10:33.196 00:10:34.131 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:34.131 Nvme0n1 : 7.00 7794.14 30.45 0.00 0.00 0.00 0.00 0.00 00:10:34.131 [2024-11-26T20:38:29.124Z] =================================================================================================================== 00:10:34.131 [2024-11-26T20:38:29.124Z] Total : 7794.14 30.45 0.00 0.00 0.00 0.00 0.00 00:10:34.131 00:10:35.066 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:35.066 Nvme0n1 : 8.00 7740.62 30.24 0.00 0.00 0.00 0.00 0.00 00:10:35.066 [2024-11-26T20:38:30.059Z] =================================================================================================================== 00:10:35.066 [2024-11-26T20:38:30.059Z] Total : 7740.62 30.24 0.00 0.00 0.00 0.00 0.00 00:10:35.066 00:10:35.998 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:35.998 Nvme0n1 : 9.00 7727.22 30.18 0.00 0.00 0.00 0.00 0.00 00:10:35.998 [2024-11-26T20:38:30.991Z] =================================================================================================================== 00:10:35.998 [2024-11-26T20:38:30.991Z] Total : 7727.22 30.18 0.00 0.00 0.00 0.00 0.00 00:10:35.998 00:10:36.932 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:36.932 Nvme0n1 : 10.00 7672.10 29.97 0.00 0.00 0.00 0.00 0.00 00:10:36.932 [2024-11-26T20:38:31.925Z] =================================================================================================================== 00:10:36.932 [2024-11-26T20:38:31.925Z] Total : 7672.10 29.97 0.00 0.00 0.00 0.00 0.00 00:10:36.932 00:10:36.932 00:10:36.932 Latency(us) 00:10:36.932 [2024-11-26T20:38:31.925Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:36.932 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:36.932 Nvme0n1 : 10.01 7680.08 30.00 0.00 0.00 16660.82 5180.46 201726.05 00:10:36.932 [2024-11-26T20:38:31.925Z] =================================================================================================================== 00:10:36.932 [2024-11-26T20:38:31.925Z] Total : 7680.08 30.00 0.00 0.00 16660.82 5180.46 201726.05 00:10:36.932 { 00:10:36.932 "results": [ 00:10:36.932 { 00:10:36.932 "job": "Nvme0n1", 00:10:36.932 "core_mask": "0x2", 00:10:36.932 "workload": "randwrite", 00:10:36.932 "status": "finished", 00:10:36.932 "queue_depth": 128, 00:10:36.932 "io_size": 4096, 00:10:36.932 "runtime": 
10.006282, 00:10:36.932 "iops": 7680.0753766483895, 00:10:36.932 "mibps": 30.00029444003277, 00:10:36.932 "io_failed": 0, 00:10:36.932 "io_timeout": 0, 00:10:36.932 "avg_latency_us": 16660.817010377184, 00:10:36.932 "min_latency_us": 5180.464761904762, 00:10:36.932 "max_latency_us": 201726.04952380952 00:10:36.932 } 00:10:36.932 ], 00:10:36.932 "core_count": 1 00:10:36.932 } 00:10:36.932 20:38:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 63911 00:10:36.932 20:38:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 63911 ']' 00:10:36.932 20:38:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 63911 00:10:36.932 20:38:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:10:36.932 20:38:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:36.932 20:38:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63911 00:10:36.932 killing process with pid 63911 00:10:36.932 Received shutdown signal, test time was about 10.000000 seconds 00:10:36.932 00:10:36.932 Latency(us) 00:10:36.932 [2024-11-26T20:38:31.925Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:36.932 [2024-11-26T20:38:31.925Z] =================================================================================================================== 00:10:36.932 [2024-11-26T20:38:31.925Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:36.932 20:38:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:10:36.932 20:38:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:10:36.932 20:38:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63911' 00:10:36.932 20:38:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 63911 00:10:36.932 20:38:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 63911 00:10:37.190 20:38:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:10:37.757 20:38:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:10:37.757 20:38:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f9a1ffb6-4e88-4c6c-9412-b3484c6cfed6 00:10:37.757 20:38:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:10:38.324 20:38:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:10:38.324 20:38:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:10:38.324 20:38:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 63528 
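This second summary closes the 10-second randwrite job of the dirty variant. The numbers are internally consistent: 7680.08 IOPS at 4096-byte I/Os is 7680.08 * 4096 / 2^20, roughly 30.00 MiB/s, which matches the reported "mibps". After the run the lvstore reports 61 free clusters, and because this is the dirty path the nvmf target (pid 63528) is killed with SIGKILL rather than shut down, deliberately leaving the lvstore with unflushed metadata. A rough sketch of that step, using the pid and lvstore UUID from this run:

  # record the post-run cluster counts, then kill the target hard to leave the lvstore dirty
  scripts/rpc.py bdev_lvol_get_lvstores -u f9a1ffb6-4e88-4c6c-9412-b3484c6cfed6 | jq -r '.[0].free_clusters'
  kill -9 63528    # no clean lvstore unload; recovery must happen on the next load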
00:10:38.324 20:38:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 63528 00:10:38.324 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 63528 Killed "${NVMF_APP[@]}" "$@" 00:10:38.324 20:38:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:10:38.324 20:38:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:10:38.324 20:38:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:38.325 20:38:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:38.325 20:38:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:38.325 20:38:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=64065 00:10:38.325 20:38:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 64065 00:10:38.325 20:38:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:10:38.325 20:38:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 64065 ']' 00:10:38.325 20:38:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:38.325 20:38:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:38.325 20:38:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:38.325 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:38.325 20:38:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:38.325 20:38:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:38.325 [2024-11-26 20:38:33.120482] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:10:38.325 [2024-11-26 20:38:33.120597] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:38.325 [2024-11-26 20:38:33.283217] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:38.583 [2024-11-26 20:38:33.360938] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:38.583 [2024-11-26 20:38:33.361237] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:38.583 [2024-11-26 20:38:33.361258] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:38.583 [2024-11-26 20:38:33.361269] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:38.583 [2024-11-26 20:38:33.361279] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
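With the old target gone, the test brings up a fresh nvmf_tgt (pid 64065, core mask 0x1, tracepoints enabled with -e 0xFFFF) inside the nvmf_tgt_ns_spdk network namespace and waits for its RPC socket. It then re-creates the AIO bdev on the same backing file, which is what forces the blobstore to run recovery on the dirty lvstore; the "Performing recovery on blobstore" and "Recover: blob" notices in the following entries are the expected result. Approximately (paths shortened; a sketch, not the exact helper invocations):

  # restart the target in the test namespace and wait until /var/tmp/spdk.sock answers
  ip netns exec nvmf_tgt_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
  # re-register the same 400M file; loading aio_bdev replays and recovers the dirty lvstore
  scripts/rpc.py bdev_aio_create test/nvmf/target/aio_bdev aio_bdev 4096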
00:10:38.583 [2024-11-26 20:38:33.361679] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:38.583 [2024-11-26 20:38:33.408025] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:39.151 20:38:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:39.151 20:38:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:10:39.151 20:38:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:39.151 20:38:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:39.151 20:38:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:39.408 20:38:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:39.408 20:38:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:39.666 [2024-11-26 20:38:34.477823] blobstore.c:4896:bs_recover: *NOTICE*: Performing recovery on blobstore 00:10:39.666 [2024-11-26 20:38:34.478376] blobstore.c:4843:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:10:39.666 [2024-11-26 20:38:34.478691] blobstore.c:4843:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:10:39.666 20:38:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:10:39.666 20:38:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 49fbc4b4-a67d-4245-ae46-ce60a5bc1b58 00:10:39.666 20:38:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=49fbc4b4-a67d-4245-ae46-ce60a5bc1b58 00:10:39.666 20:38:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:39.666 20:38:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:10:39.666 20:38:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:39.666 20:38:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:39.666 20:38:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:10:39.925 20:38:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 49fbc4b4-a67d-4245-ae46-ce60a5bc1b58 -t 2000 00:10:40.184 [ 00:10:40.184 { 00:10:40.184 "name": "49fbc4b4-a67d-4245-ae46-ce60a5bc1b58", 00:10:40.184 "aliases": [ 00:10:40.184 "lvs/lvol" 00:10:40.184 ], 00:10:40.184 "product_name": "Logical Volume", 00:10:40.184 "block_size": 4096, 00:10:40.184 "num_blocks": 38912, 00:10:40.184 "uuid": "49fbc4b4-a67d-4245-ae46-ce60a5bc1b58", 00:10:40.184 "assigned_rate_limits": { 00:10:40.184 "rw_ios_per_sec": 0, 00:10:40.184 "rw_mbytes_per_sec": 0, 00:10:40.184 "r_mbytes_per_sec": 0, 00:10:40.184 "w_mbytes_per_sec": 0 00:10:40.184 }, 00:10:40.184 
"claimed": false, 00:10:40.184 "zoned": false, 00:10:40.184 "supported_io_types": { 00:10:40.184 "read": true, 00:10:40.184 "write": true, 00:10:40.184 "unmap": true, 00:10:40.184 "flush": false, 00:10:40.184 "reset": true, 00:10:40.184 "nvme_admin": false, 00:10:40.184 "nvme_io": false, 00:10:40.184 "nvme_io_md": false, 00:10:40.184 "write_zeroes": true, 00:10:40.184 "zcopy": false, 00:10:40.184 "get_zone_info": false, 00:10:40.184 "zone_management": false, 00:10:40.184 "zone_append": false, 00:10:40.184 "compare": false, 00:10:40.184 "compare_and_write": false, 00:10:40.184 "abort": false, 00:10:40.184 "seek_hole": true, 00:10:40.184 "seek_data": true, 00:10:40.184 "copy": false, 00:10:40.184 "nvme_iov_md": false 00:10:40.184 }, 00:10:40.184 "driver_specific": { 00:10:40.184 "lvol": { 00:10:40.184 "lvol_store_uuid": "f9a1ffb6-4e88-4c6c-9412-b3484c6cfed6", 00:10:40.184 "base_bdev": "aio_bdev", 00:10:40.184 "thin_provision": false, 00:10:40.184 "num_allocated_clusters": 38, 00:10:40.184 "snapshot": false, 00:10:40.184 "clone": false, 00:10:40.184 "esnap_clone": false 00:10:40.184 } 00:10:40.184 } 00:10:40.184 } 00:10:40.184 ] 00:10:40.184 20:38:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:10:40.184 20:38:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f9a1ffb6-4e88-4c6c-9412-b3484c6cfed6 00:10:40.184 20:38:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:10:40.443 20:38:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:10:40.443 20:38:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f9a1ffb6-4e88-4c6c-9412-b3484c6cfed6 00:10:40.443 20:38:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:10:40.704 20:38:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:10:40.704 20:38:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:41.272 [2024-11-26 20:38:35.975250] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:10:41.272 20:38:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f9a1ffb6-4e88-4c6c-9412-b3484c6cfed6 00:10:41.272 20:38:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:10:41.272 20:38:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f9a1ffb6-4e88-4c6c-9412-b3484c6cfed6 00:10:41.272 20:38:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:41.272 20:38:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:41.272 20:38:36 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:41.272 20:38:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:41.272 20:38:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:41.272 20:38:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:41.272 20:38:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:41.272 20:38:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:10:41.272 20:38:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f9a1ffb6-4e88-4c6c-9412-b3484c6cfed6 00:10:41.531 request: 00:10:41.531 { 00:10:41.531 "uuid": "f9a1ffb6-4e88-4c6c-9412-b3484c6cfed6", 00:10:41.531 "method": "bdev_lvol_get_lvstores", 00:10:41.531 "req_id": 1 00:10:41.531 } 00:10:41.531 Got JSON-RPC error response 00:10:41.531 response: 00:10:41.531 { 00:10:41.531 "code": -19, 00:10:41.531 "message": "No such device" 00:10:41.531 } 00:10:41.531 20:38:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:10:41.531 20:38:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:41.531 20:38:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:41.531 20:38:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:41.531 20:38:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:41.790 aio_bdev 00:10:41.790 20:38:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 49fbc4b4-a67d-4245-ae46-ce60a5bc1b58 00:10:41.790 20:38:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=49fbc4b4-a67d-4245-ae46-ce60a5bc1b58 00:10:41.790 20:38:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:41.790 20:38:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:10:41.790 20:38:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:41.790 20:38:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:41.790 20:38:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:10:42.048 20:38:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 49fbc4b4-a67d-4245-ae46-ce60a5bc1b58 -t 2000 00:10:42.306 [ 00:10:42.306 { 
00:10:42.306 "name": "49fbc4b4-a67d-4245-ae46-ce60a5bc1b58", 00:10:42.306 "aliases": [ 00:10:42.306 "lvs/lvol" 00:10:42.306 ], 00:10:42.306 "product_name": "Logical Volume", 00:10:42.306 "block_size": 4096, 00:10:42.306 "num_blocks": 38912, 00:10:42.306 "uuid": "49fbc4b4-a67d-4245-ae46-ce60a5bc1b58", 00:10:42.306 "assigned_rate_limits": { 00:10:42.306 "rw_ios_per_sec": 0, 00:10:42.306 "rw_mbytes_per_sec": 0, 00:10:42.306 "r_mbytes_per_sec": 0, 00:10:42.306 "w_mbytes_per_sec": 0 00:10:42.306 }, 00:10:42.306 "claimed": false, 00:10:42.306 "zoned": false, 00:10:42.306 "supported_io_types": { 00:10:42.306 "read": true, 00:10:42.306 "write": true, 00:10:42.306 "unmap": true, 00:10:42.306 "flush": false, 00:10:42.306 "reset": true, 00:10:42.306 "nvme_admin": false, 00:10:42.306 "nvme_io": false, 00:10:42.306 "nvme_io_md": false, 00:10:42.306 "write_zeroes": true, 00:10:42.306 "zcopy": false, 00:10:42.306 "get_zone_info": false, 00:10:42.306 "zone_management": false, 00:10:42.306 "zone_append": false, 00:10:42.306 "compare": false, 00:10:42.306 "compare_and_write": false, 00:10:42.306 "abort": false, 00:10:42.306 "seek_hole": true, 00:10:42.306 "seek_data": true, 00:10:42.306 "copy": false, 00:10:42.306 "nvme_iov_md": false 00:10:42.306 }, 00:10:42.306 "driver_specific": { 00:10:42.306 "lvol": { 00:10:42.306 "lvol_store_uuid": "f9a1ffb6-4e88-4c6c-9412-b3484c6cfed6", 00:10:42.306 "base_bdev": "aio_bdev", 00:10:42.306 "thin_provision": false, 00:10:42.306 "num_allocated_clusters": 38, 00:10:42.306 "snapshot": false, 00:10:42.306 "clone": false, 00:10:42.306 "esnap_clone": false 00:10:42.306 } 00:10:42.306 } 00:10:42.306 } 00:10:42.306 ] 00:10:42.306 20:38:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:10:42.306 20:38:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f9a1ffb6-4e88-4c6c-9412-b3484c6cfed6 00:10:42.306 20:38:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:10:42.564 20:38:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:10:42.564 20:38:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f9a1ffb6-4e88-4c6c-9412-b3484c6cfed6 00:10:42.564 20:38:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:10:42.822 20:38:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:10:42.822 20:38:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 49fbc4b4-a67d-4245-ae46-ce60a5bc1b58 00:10:43.395 20:38:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u f9a1ffb6-4e88-4c6c-9412-b3484c6cfed6 00:10:43.395 20:38:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:43.655 20:38:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:44.221 00:10:44.221 real 0m21.829s 00:10:44.221 user 0m44.605s 00:10:44.221 sys 0m8.552s 00:10:44.221 20:38:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:44.221 ************************************ 00:10:44.221 END TEST lvs_grow_dirty 00:10:44.221 ************************************ 00:10:44.221 20:38:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:44.221 20:38:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:10:44.221 20:38:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:10:44.221 20:38:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:10:44.221 20:38:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:10:44.221 20:38:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:10:44.221 20:38:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:10:44.222 20:38:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:10:44.222 20:38:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:10:44.222 20:38:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:10:44.222 nvmf_trace.0 00:10:44.222 20:38:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:10:44.222 20:38:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:10:44.222 20:38:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:44.222 20:38:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:10:44.481 20:38:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:44.481 20:38:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:10:44.481 20:38:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:44.481 20:38:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:44.481 rmmod nvme_tcp 00:10:44.481 rmmod nvme_fabrics 00:10:44.481 rmmod nvme_keyring 00:10:44.481 20:38:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:44.481 20:38:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:10:44.481 20:38:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:10:44.481 20:38:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 64065 ']' 00:10:44.481 20:38:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 64065 00:10:44.481 20:38:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 64065 ']' 00:10:44.481 20:38:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 64065 00:10:44.481 20:38:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:10:44.481 20:38:39 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:44.481 20:38:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64065 00:10:44.481 killing process with pid 64065 00:10:44.481 20:38:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:44.481 20:38:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:44.481 20:38:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64065' 00:10:44.481 20:38:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 64065 00:10:44.481 20:38:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 64065 00:10:44.740 20:38:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:44.740 20:38:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:44.740 20:38:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:44.740 20:38:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:10:44.740 20:38:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:10:44.740 20:38:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:44.740 20:38:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:10:44.740 20:38:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:44.740 20:38:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:10:44.740 20:38:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:10:44.740 20:38:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:10:44.740 20:38:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:10:44.740 20:38:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:10:44.740 20:38:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:10:44.740 20:38:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:10:44.740 20:38:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:10:44.740 20:38:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:10:44.740 20:38:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:10:44.740 20:38:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:10:44.740 20:38:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:10:44.740 20:38:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:44.740 20:38:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:45.000 20:38:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@246 -- # remove_spdk_ns 00:10:45.001 20:38:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:45.001 20:38:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:45.001 20:38:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:45.001 20:38:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@300 -- # return 0 00:10:45.001 ************************************ 00:10:45.001 END TEST nvmf_lvs_grow 00:10:45.001 ************************************ 00:10:45.001 00:10:45.001 real 0m44.671s 00:10:45.001 user 1m9.803s 00:10:45.001 sys 0m13.126s 00:10:45.001 20:38:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:45.001 20:38:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:45.001 20:38:39 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:10:45.001 20:38:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:45.001 20:38:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:45.001 20:38:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:45.001 ************************************ 00:10:45.001 START TEST nvmf_bdev_io_wait 00:10:45.001 ************************************ 00:10:45.001 20:38:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:10:45.001 * Looking for test storage... 
00:10:45.001 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:45.001 20:38:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:45.001 20:38:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:10:45.001 20:38:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:45.270 20:38:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:45.270 20:38:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:45.270 20:38:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:45.270 20:38:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:45.270 20:38:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:10:45.270 20:38:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:10:45.270 20:38:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:10:45.270 20:38:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:10:45.270 20:38:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:10:45.270 20:38:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:10:45.270 20:38:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:10:45.270 20:38:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:45.270 20:38:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:10:45.270 20:38:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:10:45.270 20:38:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:45.270 20:38:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:45.270 20:38:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:10:45.270 20:38:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:10:45.270 20:38:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:45.270 20:38:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:10:45.270 20:38:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:10:45.270 20:38:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:10:45.270 20:38:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:10:45.270 20:38:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:45.270 20:38:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:10:45.270 20:38:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:10:45.270 20:38:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:45.270 20:38:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:45.271 20:38:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:10:45.271 20:38:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:45.271 20:38:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:45.271 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:45.271 --rc genhtml_branch_coverage=1 00:10:45.271 --rc genhtml_function_coverage=1 00:10:45.271 --rc genhtml_legend=1 00:10:45.271 --rc geninfo_all_blocks=1 00:10:45.271 --rc geninfo_unexecuted_blocks=1 00:10:45.271 00:10:45.271 ' 00:10:45.271 20:38:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:45.271 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:45.271 --rc genhtml_branch_coverage=1 00:10:45.271 --rc genhtml_function_coverage=1 00:10:45.271 --rc genhtml_legend=1 00:10:45.271 --rc geninfo_all_blocks=1 00:10:45.271 --rc geninfo_unexecuted_blocks=1 00:10:45.271 00:10:45.271 ' 00:10:45.271 20:38:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:45.271 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:45.271 --rc genhtml_branch_coverage=1 00:10:45.271 --rc genhtml_function_coverage=1 00:10:45.271 --rc genhtml_legend=1 00:10:45.271 --rc geninfo_all_blocks=1 00:10:45.271 --rc geninfo_unexecuted_blocks=1 00:10:45.271 00:10:45.271 ' 00:10:45.271 20:38:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:45.271 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:45.271 --rc genhtml_branch_coverage=1 00:10:45.271 --rc genhtml_function_coverage=1 00:10:45.271 --rc genhtml_legend=1 00:10:45.271 --rc geninfo_all_blocks=1 00:10:45.271 --rc geninfo_unexecuted_blocks=1 00:10:45.271 00:10:45.271 ' 00:10:45.271 20:38:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:45.271 20:38:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait 
-- nvmf/common.sh@7 -- # uname -s 00:10:45.271 20:38:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:45.271 20:38:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:45.271 20:38:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:45.271 20:38:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:45.271 20:38:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:45.271 20:38:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:45.271 20:38:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:45.271 20:38:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:45.271 20:38:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:45.271 20:38:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:45.271 20:38:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b 00:10:45.271 20:38:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=5b7a0101-ee75-44bd-b64f-b6a56d193f2b 00:10:45.271 20:38:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:45.271 20:38:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:45.271 20:38:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:45.271 20:38:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:45.271 20:38:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:45.271 20:38:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:10:45.271 20:38:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:45.271 20:38:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:45.271 20:38:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:45.271 20:38:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:45.271 20:38:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:45.271 20:38:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:45.271 20:38:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:10:45.271 20:38:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:45.271 20:38:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:10:45.271 20:38:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:45.271 20:38:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:45.271 20:38:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:45.271 20:38:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:45.271 20:38:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:45.271 20:38:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:45.271 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:45.271 20:38:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:45.271 20:38:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:45.271 20:38:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:45.271 20:38:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:45.271 20:38:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 
00:10:45.271 20:38:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:10:45.271 20:38:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:45.271 20:38:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:45.271 20:38:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:45.271 20:38:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:45.271 20:38:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:45.271 20:38:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:45.271 20:38:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:45.271 20:38:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:45.271 20:38:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:10:45.271 20:38:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:10:45.271 20:38:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:10:45.271 20:38:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:10:45.271 20:38:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:10:45.271 20:38:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@460 -- # nvmf_veth_init 00:10:45.271 20:38:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:45.271 20:38:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:10:45.271 20:38:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:10:45.271 20:38:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:10:45.271 20:38:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:45.271 20:38:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:10:45.271 20:38:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:45.271 20:38:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:10:45.271 20:38:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:45.271 20:38:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:10:45.271 20:38:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:45.271 20:38:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:45.271 20:38:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:45.271 20:38:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:45.271 
20:38:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:45.271 20:38:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:45.272 20:38:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:10:45.272 Cannot find device "nvmf_init_br" 00:10:45.272 20:38:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # true 00:10:45.272 20:38:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:10:45.272 Cannot find device "nvmf_init_br2" 00:10:45.272 20:38:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # true 00:10:45.272 20:38:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:10:45.272 Cannot find device "nvmf_tgt_br" 00:10:45.272 20:38:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # true 00:10:45.272 20:38:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:10:45.272 Cannot find device "nvmf_tgt_br2" 00:10:45.272 20:38:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # true 00:10:45.272 20:38:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:10:45.272 Cannot find device "nvmf_init_br" 00:10:45.272 20:38:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # true 00:10:45.272 20:38:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:10:45.272 Cannot find device "nvmf_init_br2" 00:10:45.272 20:38:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # true 00:10:45.272 20:38:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:10:45.272 Cannot find device "nvmf_tgt_br" 00:10:45.272 20:38:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # true 00:10:45.272 20:38:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:10:45.272 Cannot find device "nvmf_tgt_br2" 00:10:45.272 20:38:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # true 00:10:45.272 20:38:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:10:45.272 Cannot find device "nvmf_br" 00:10:45.272 20:38:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # true 00:10:45.272 20:38:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:10:45.272 Cannot find device "nvmf_init_if" 00:10:45.272 20:38:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # true 00:10:45.272 20:38:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:10:45.528 Cannot find device "nvmf_init_if2" 00:10:45.528 20:38:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # true 00:10:45.528 20:38:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:45.528 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:45.528 20:38:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # true 00:10:45.528 
20:38:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:45.528 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:45.528 20:38:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # true 00:10:45.528 20:38:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:10:45.528 20:38:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:45.528 20:38:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:10:45.528 20:38:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:45.528 20:38:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:45.528 20:38:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:45.528 20:38:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:45.529 20:38:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:45.529 20:38:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:10:45.529 20:38:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:10:45.529 20:38:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:10:45.529 20:38:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:10:45.529 20:38:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:10:45.529 20:38:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:10:45.529 20:38:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:10:45.529 20:38:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:10:45.529 20:38:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:10:45.529 20:38:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:45.529 20:38:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:45.529 20:38:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:45.529 20:38:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:10:45.529 20:38:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:10:45.529 20:38:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:10:45.529 20:38:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:10:45.529 20:38:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:45.529 20:38:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:45.787 20:38:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:45.788 20:38:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:10:45.788 20:38:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:10:45.788 20:38:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:10:45.788 20:38:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:45.788 20:38:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:10:45.788 20:38:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:10:45.788 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:45.788 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.103 ms 00:10:45.788 00:10:45.788 --- 10.0.0.3 ping statistics --- 00:10:45.788 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:45.788 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:10:45.788 20:38:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:10:45.788 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:10:45.788 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.069 ms 00:10:45.788 00:10:45.788 --- 10.0.0.4 ping statistics --- 00:10:45.788 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:45.788 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:10:45.788 20:38:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:45.788 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:45.788 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.049 ms 00:10:45.788 00:10:45.788 --- 10.0.0.1 ping statistics --- 00:10:45.788 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:45.788 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:10:45.788 20:38:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:10:45.788 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:45.788 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.085 ms 00:10:45.788 00:10:45.788 --- 10.0.0.2 ping statistics --- 00:10:45.788 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:45.788 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:10:45.788 20:38:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:45.788 20:38:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@461 -- # return 0 00:10:45.788 20:38:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:45.788 20:38:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:45.788 20:38:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:45.788 20:38:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:45.788 20:38:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:45.788 20:38:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:45.788 20:38:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:45.788 20:38:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:10:45.788 20:38:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:45.788 20:38:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:45.788 20:38:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:45.788 20:38:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=64445 00:10:45.788 20:38:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:10:45.788 20:38:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 64445 00:10:45.788 20:38:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 64445 ']' 00:10:45.788 20:38:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:45.788 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:45.788 20:38:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:45.788 20:38:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:45.788 20:38:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:45.788 20:38:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:45.788 [2024-11-26 20:38:40.661609] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:10:45.788 [2024-11-26 20:38:40.661725] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:46.046 [2024-11-26 20:38:40.823490] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:46.046 [2024-11-26 20:38:40.915458] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:46.046 [2024-11-26 20:38:40.916113] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:46.046 [2024-11-26 20:38:40.916440] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:46.046 [2024-11-26 20:38:40.916716] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:46.046 [2024-11-26 20:38:40.916964] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:46.046 [2024-11-26 20:38:40.918829] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:46.046 [2024-11-26 20:38:40.918990] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:46.046 [2024-11-26 20:38:40.919596] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:46.046 [2024-11-26 20:38:40.919607] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:46.984 20:38:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:46.985 20:38:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:10:46.985 20:38:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:46.985 20:38:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:46.985 20:38:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:46.985 20:38:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:46.985 20:38:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:10:46.985 20:38:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.985 20:38:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:46.985 20:38:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.985 20:38:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:10:46.985 20:38:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.985 20:38:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:46.985 [2024-11-26 20:38:41.829992] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:46.985 20:38:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.985 20:38:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:46.985 20:38:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.985 20:38:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:46.985 [2024-11-26 20:38:41.845708] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:46.985 20:38:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.985 20:38:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:46.985 20:38:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.985 20:38:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:46.985 Malloc0 00:10:46.985 20:38:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.985 20:38:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:46.985 20:38:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.985 20:38:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:46.985 20:38:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.985 20:38:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:46.985 20:38:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.985 20:38:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:46.985 20:38:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.985 20:38:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:10:46.985 20:38:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.985 20:38:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:46.985 [2024-11-26 20:38:41.902992] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:46.985 20:38:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.985 20:38:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=64480 00:10:46.985 20:38:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=64482 00:10:46.985 20:38:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:10:46.985 20:38:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:10:46.985 20:38:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=64484 00:10:46.985 20:38:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:10:46.985 20:38:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:10:46.985 20:38:41 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:46.985 20:38:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:10:46.985 20:38:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:10:46.985 20:38:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:10:46.985 20:38:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:46.985 { 00:10:46.985 "params": { 00:10:46.985 "name": "Nvme$subsystem", 00:10:46.985 "trtype": "$TEST_TRANSPORT", 00:10:46.985 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:46.985 "adrfam": "ipv4", 00:10:46.985 "trsvcid": "$NVMF_PORT", 00:10:46.985 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:46.985 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:46.985 "hdgst": ${hdgst:-false}, 00:10:46.985 "ddgst": ${ddgst:-false} 00:10:46.985 }, 00:10:46.985 "method": "bdev_nvme_attach_controller" 00:10:46.985 } 00:10:46.985 EOF 00:10:46.985 )") 00:10:46.985 20:38:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:10:46.985 20:38:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=64486 00:10:46.985 20:38:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:10:46.985 20:38:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:10:46.985 20:38:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:46.985 20:38:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:46.985 { 00:10:46.985 "params": { 00:10:46.985 "name": "Nvme$subsystem", 00:10:46.985 "trtype": "$TEST_TRANSPORT", 00:10:46.985 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:46.985 "adrfam": "ipv4", 00:10:46.985 "trsvcid": "$NVMF_PORT", 00:10:46.985 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:46.985 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:46.985 "hdgst": ${hdgst:-false}, 00:10:46.985 "ddgst": ${ddgst:-false} 00:10:46.985 }, 00:10:46.985 "method": "bdev_nvme_attach_controller" 00:10:46.985 } 00:10:46.985 EOF 00:10:46.985 )") 00:10:46.985 20:38:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:10:46.985 20:38:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:10:46.985 20:38:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:10:46.985 20:38:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:46.985 20:38:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:46.985 { 00:10:46.985 "params": { 00:10:46.985 "name": "Nvme$subsystem", 00:10:46.985 "trtype": "$TEST_TRANSPORT", 00:10:46.985 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:46.985 "adrfam": "ipv4", 00:10:46.985 "trsvcid": "$NVMF_PORT", 00:10:46.985 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:46.985 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:46.985 "hdgst": ${hdgst:-false}, 00:10:46.985 
"ddgst": ${ddgst:-false} 00:10:46.985 }, 00:10:46.985 "method": "bdev_nvme_attach_controller" 00:10:46.985 } 00:10:46.985 EOF 00:10:46.985 )") 00:10:46.985 20:38:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:10:46.985 20:38:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:10:46.985 20:38:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:10:46.986 20:38:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:10:46.986 20:38:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:10:46.986 20:38:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:10:46.986 20:38:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:10:46.986 20:38:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:10:46.986 20:38:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:46.986 "params": { 00:10:46.986 "name": "Nvme1", 00:10:46.986 "trtype": "tcp", 00:10:46.986 "traddr": "10.0.0.3", 00:10:46.986 "adrfam": "ipv4", 00:10:46.986 "trsvcid": "4420", 00:10:46.986 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:46.986 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:46.986 "hdgst": false, 00:10:46.986 "ddgst": false 00:10:46.986 }, 00:10:46.986 "method": "bdev_nvme_attach_controller" 00:10:46.986 }' 00:10:46.986 20:38:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:10:46.986 20:38:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:46.986 "params": { 00:10:46.986 "name": "Nvme1", 00:10:46.986 "trtype": "tcp", 00:10:46.986 "traddr": "10.0.0.3", 00:10:46.986 "adrfam": "ipv4", 00:10:46.986 "trsvcid": "4420", 00:10:46.986 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:46.986 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:46.986 "hdgst": false, 00:10:46.986 "ddgst": false 00:10:46.986 }, 00:10:46.986 "method": "bdev_nvme_attach_controller" 00:10:46.986 }' 00:10:46.986 20:38:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:10:46.986 20:38:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:10:46.986 20:38:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:46.986 "params": { 00:10:46.986 "name": "Nvme1", 00:10:46.986 "trtype": "tcp", 00:10:46.986 "traddr": "10.0.0.3", 00:10:46.986 "adrfam": "ipv4", 00:10:46.986 "trsvcid": "4420", 00:10:46.986 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:46.986 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:46.986 "hdgst": false, 00:10:46.986 "ddgst": false 00:10:46.986 }, 00:10:46.986 "method": "bdev_nvme_attach_controller" 00:10:46.986 }' 00:10:46.986 20:38:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:10:46.986 20:38:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:10:46.986 20:38:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:46.986 20:38:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:46.986 { 00:10:46.986 "params": { 00:10:46.986 "name": "Nvme$subsystem", 00:10:46.986 "trtype": "$TEST_TRANSPORT", 00:10:46.986 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:46.986 "adrfam": "ipv4", 00:10:46.986 "trsvcid": "$NVMF_PORT", 00:10:46.986 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:46.986 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:46.986 "hdgst": ${hdgst:-false}, 00:10:46.986 "ddgst": ${ddgst:-false} 00:10:46.986 }, 00:10:46.986 "method": "bdev_nvme_attach_controller" 00:10:46.986 } 00:10:46.986 EOF 00:10:46.986 )") 00:10:46.986 20:38:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:10:46.986 20:38:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:10:46.986 20:38:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:10:46.986 20:38:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:46.986 "params": { 00:10:46.986 "name": "Nvme1", 00:10:46.986 "trtype": "tcp", 00:10:46.986 "traddr": "10.0.0.3", 00:10:46.986 "adrfam": "ipv4", 00:10:46.986 "trsvcid": "4420", 00:10:46.986 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:46.986 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:46.986 "hdgst": false, 00:10:46.986 "ddgst": false 00:10:46.986 }, 00:10:46.986 "method": "bdev_nvme_attach_controller" 00:10:46.986 }' 00:10:46.986 [2024-11-26 20:38:41.967877] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:10:46.986 [2024-11-26 20:38:41.968263] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:10:46.986 [2024-11-26 20:38:41.975061] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:10:46.986 [2024-11-26 20:38:41.975472] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:10:47.245 [2024-11-26 20:38:41.997878] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:10:47.245 [2024-11-26 20:38:42.000480] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:10:47.245 20:38:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 64480 00:10:47.245 [2024-11-26 20:38:42.011692] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:10:47.245 [2024-11-26 20:38:42.011795] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:10:47.245 [2024-11-26 20:38:42.221196] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:47.504 [2024-11-26 20:38:42.298843] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:10:47.504 [2024-11-26 20:38:42.313039] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:47.504 [2024-11-26 20:38:42.345542] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:47.504 [2024-11-26 20:38:42.415935] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:47.504 [2024-11-26 20:38:42.420599] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:10:47.504 [2024-11-26 20:38:42.434785] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:47.504 [2024-11-26 20:38:42.488748] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:10:47.764 [2024-11-26 20:38:42.503916] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:47.764 [2024-11-26 20:38:42.540371] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:47.764 Running I/O for 1 seconds... 00:10:47.764 Running I/O for 1 seconds... 00:10:47.764 [2024-11-26 20:38:42.593493] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:10:47.764 [2024-11-26 20:38:42.607568] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:47.764 Running I/O for 1 seconds... 00:10:48.023 Running I/O for 1 seconds... 
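For reference, the gen_nvmf_target_json dance traced above (nvmf/common.sh@560-586) builds one attach-controller object per subsystem as a heredoc fragment in a bash array, joins the fragments with IFS=',', validates the result with jq, and hands it to bdevperf over /dev/fd/63. A standalone sketch of one of the four launches is below; the inner "params"/"method" object is taken verbatim from the printf output above, while the outer "subsystems"/"bdev"/"config" wrapper is assumed from nvmf/common.sh and does not appear verbatim in this trace, so treat the sketch as illustrative rather than authoritative.

  # Hypothetical standalone equivalent of one bdevperf launch above; a temp file
  # stands in for the /dev/fd/63 process substitution used by the test script.
  cat > /tmp/nvme1.json <<'EOF'
  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          {
            "params": {
              "name": "Nvme1",
              "trtype": "tcp",
              "traddr": "10.0.0.3",
              "adrfam": "ipv4",
              "trsvcid": "4420",
              "subnqn": "nqn.2016-06.io.spdk:cnode1",
              "hostnqn": "nqn.2016-06.io.spdk:host1",
              "hdgst": false,
              "ddgst": false
            },
            "method": "bdev_nvme_attach_controller"
          }
        ]
      }
    ]
  }
  EOF
  jq . /tmp/nvme1.json > /dev/null   # sanity-check the JSON, as the helper does
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 \
      --json /tmp/nvme1.json -q 128 -o 4096 -w unmap -t 1 -s 256

In the trace the config is delivered via process substitution (--json /dev/fd/63) rather than a file; the temp file here only keeps the sketch self-contained. Four such instances (file prefixes spdk1..spdk4, core masks 0x10 through 0x80) are launched back to back, which is why their EAL init lines interleave above.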
00:10:48.702 9977.00 IOPS, 38.97 MiB/s [2024-11-26T20:38:43.695Z] 6216.00 IOPS, 24.28 MiB/s 00:10:48.702 Latency(us) 00:10:48.702 [2024-11-26T20:38:43.695Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:48.702 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:10:48.702 Nvme1n1 : 1.01 10028.44 39.17 0.00 0.00 12708.98 4587.52 16976.94 00:10:48.702 [2024-11-26T20:38:43.695Z] =================================================================================================================== 00:10:48.702 [2024-11-26T20:38:43.695Z] Total : 10028.44 39.17 0.00 0.00 12708.98 4587.52 16976.94 00:10:48.702 00:10:48.702 Latency(us) 00:10:48.702 [2024-11-26T20:38:43.695Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:48.702 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:10:48.702 Nvme1n1 : 1.01 6265.96 24.48 0.00 0.00 20288.39 11546.82 47185.92 00:10:48.702 [2024-11-26T20:38:43.695Z] =================================================================================================================== 00:10:48.702 [2024-11-26T20:38:43.695Z] Total : 6265.96 24.48 0.00 0.00 20288.39 11546.82 47185.92 00:10:48.960 145600.00 IOPS, 568.75 MiB/s 00:10:48.960 Latency(us) 00:10:48.960 [2024-11-26T20:38:43.953Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:48.961 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:10:48.961 Nvme1n1 : 1.00 145292.28 567.55 0.00 0.00 876.28 417.40 2122.12 00:10:48.961 [2024-11-26T20:38:43.954Z] =================================================================================================================== 00:10:48.961 [2024-11-26T20:38:43.954Z] Total : 145292.28 567.55 0.00 0.00 876.28 417.40 2122.12 00:10:48.961 9797.00 IOPS, 38.27 MiB/s 00:10:48.961 Latency(us) 00:10:48.961 [2024-11-26T20:38:43.954Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:48.961 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:10:48.961 Nvme1n1 : 1.01 9870.61 38.56 0.00 0.00 12916.41 6210.32 23343.30 00:10:48.961 [2024-11-26T20:38:43.954Z] =================================================================================================================== 00:10:48.961 [2024-11-26T20:38:43.954Z] Total : 9870.61 38.56 0.00 0.00 12916.41 6210.32 23343.30 00:10:48.961 20:38:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 64482 00:10:48.961 20:38:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 64484 00:10:48.961 20:38:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 64486 00:10:48.961 20:38:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:48.961 20:38:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.961 20:38:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:48.961 20:38:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.961 20:38:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:10:48.961 20:38:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:10:48.961 20:38:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@516 -- # nvmfcleanup 00:10:48.961 20:38:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:10:49.221 20:38:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:49.221 20:38:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:10:49.221 20:38:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:49.221 20:38:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:49.221 rmmod nvme_tcp 00:10:49.221 rmmod nvme_fabrics 00:10:49.221 rmmod nvme_keyring 00:10:49.221 20:38:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:49.221 20:38:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:10:49.221 20:38:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:10:49.221 20:38:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 64445 ']' 00:10:49.221 20:38:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 64445 00:10:49.221 20:38:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 64445 ']' 00:10:49.221 20:38:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 64445 00:10:49.221 20:38:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:10:49.221 20:38:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:49.221 20:38:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64445 00:10:49.221 20:38:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:49.221 20:38:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:49.221 20:38:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64445' 00:10:49.221 killing process with pid 64445 00:10:49.221 20:38:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 64445 00:10:49.221 20:38:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 64445 00:10:49.480 20:38:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:49.480 20:38:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:49.480 20:38:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:49.480 20:38:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:10:49.480 20:38:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:10:49.480 20:38:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:49.480 20:38:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:10:49.480 20:38:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:49.480 20:38:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:10:49.480 20:38:44 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:10:49.480 20:38:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:10:49.480 20:38:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:10:49.480 20:38:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:10:49.480 20:38:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:10:49.480 20:38:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:10:49.480 20:38:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:10:49.480 20:38:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:10:49.480 20:38:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:10:49.480 20:38:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:10:49.480 20:38:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:10:49.480 20:38:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:49.480 20:38:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:49.480 20:38:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@246 -- # remove_spdk_ns 00:10:49.480 20:38:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:49.480 20:38:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:49.480 20:38:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:49.739 20:38:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@300 -- # return 0 00:10:49.739 00:10:49.739 real 0m4.646s 00:10:49.739 user 0m18.055s 00:10:49.739 sys 0m2.850s 00:10:49.739 20:38:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:49.739 20:38:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:49.739 ************************************ 00:10:49.739 END TEST nvmf_bdev_io_wait 00:10:49.739 ************************************ 00:10:49.739 20:38:44 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:10:49.739 20:38:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:49.739 20:38:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:49.739 20:38:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:49.739 ************************************ 00:10:49.739 START TEST nvmf_queue_depth 00:10:49.739 ************************************ 00:10:49.740 20:38:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:10:49.740 * Looking for test 
storage... 00:10:49.740 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:49.740 20:38:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:49.740 20:38:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:10:49.740 20:38:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:49.999 20:38:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:49.999 20:38:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:49.999 20:38:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:49.999 20:38:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:49.999 20:38:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:10:49.999 20:38:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:10:49.999 20:38:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:10:49.999 20:38:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:10:49.999 20:38:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:10:49.999 20:38:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:10:49.999 20:38:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:10:49.999 20:38:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:49.999 20:38:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:10:49.999 20:38:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:10:49.999 20:38:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:49.999 20:38:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:49.999 20:38:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:10:49.999 20:38:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:10:49.999 20:38:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:49.999 20:38:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:10:49.999 20:38:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:10:49.999 20:38:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:10:49.999 20:38:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:10:49.999 20:38:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:49.999 20:38:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:10:50.000 20:38:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:10:50.000 20:38:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:50.000 20:38:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:50.000 20:38:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:10:50.000 20:38:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:50.000 20:38:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:50.000 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:50.000 --rc genhtml_branch_coverage=1 00:10:50.000 --rc genhtml_function_coverage=1 00:10:50.000 --rc genhtml_legend=1 00:10:50.000 --rc geninfo_all_blocks=1 00:10:50.000 --rc geninfo_unexecuted_blocks=1 00:10:50.000 00:10:50.000 ' 00:10:50.000 20:38:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:50.000 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:50.000 --rc genhtml_branch_coverage=1 00:10:50.000 --rc genhtml_function_coverage=1 00:10:50.000 --rc genhtml_legend=1 00:10:50.000 --rc geninfo_all_blocks=1 00:10:50.000 --rc geninfo_unexecuted_blocks=1 00:10:50.000 00:10:50.000 ' 00:10:50.000 20:38:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:50.000 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:50.000 --rc genhtml_branch_coverage=1 00:10:50.000 --rc genhtml_function_coverage=1 00:10:50.000 --rc genhtml_legend=1 00:10:50.000 --rc geninfo_all_blocks=1 00:10:50.000 --rc geninfo_unexecuted_blocks=1 00:10:50.000 00:10:50.000 ' 00:10:50.000 20:38:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:50.000 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:50.000 --rc genhtml_branch_coverage=1 00:10:50.000 --rc genhtml_function_coverage=1 00:10:50.000 --rc genhtml_legend=1 00:10:50.000 --rc geninfo_all_blocks=1 00:10:50.000 --rc geninfo_unexecuted_blocks=1 00:10:50.000 00:10:50.000 ' 00:10:50.000 20:38:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:50.000 20:38:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 
-- # uname -s 00:10:50.000 20:38:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:50.000 20:38:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:50.000 20:38:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:50.000 20:38:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:50.000 20:38:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:50.000 20:38:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:50.000 20:38:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:50.000 20:38:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:50.000 20:38:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:50.000 20:38:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:50.000 20:38:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b 00:10:50.000 20:38:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=5b7a0101-ee75-44bd-b64f-b6a56d193f2b 00:10:50.000 20:38:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:50.000 20:38:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:50.000 20:38:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:50.000 20:38:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:50.000 20:38:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:50.000 20:38:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:10:50.000 20:38:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:50.000 20:38:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:50.000 20:38:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:50.000 20:38:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:50.000 20:38:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:50.000 20:38:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:50.000 20:38:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:10:50.000 20:38:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:50.000 20:38:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:10:50.000 20:38:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:50.000 20:38:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:50.000 20:38:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:50.000 20:38:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:50.000 20:38:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:50.000 20:38:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:50.000 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:50.000 20:38:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:50.000 20:38:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:50.000 20:38:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:50.000 20:38:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:10:50.000 20:38:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:10:50.000 
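The host identity used throughout these tests comes straight from nvme gen-hostnqn, as traced at nvmf/common.sh@17-19 above. The exact shell extraction of the host ID is not visible in the trace, so the snippet below is only a plausible sketch of how the UUID portion can be peeled off the generated NQN.

  NVME_HOSTNQN=$(nvme gen-hostnqn)    # e.g. nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b
  NVME_HOSTID=${NVME_HOSTNQN##*:}     # strip up to the last ':' to get the bare UUID (assumed extraction, not shown in the trace)
  NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")   # matches the array built at nvmf/common.sh@19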
20:38:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:10:50.000 20:38:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:10:50.000 20:38:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:50.000 20:38:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:50.000 20:38:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:50.000 20:38:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:50.000 20:38:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:50.000 20:38:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:50.000 20:38:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:50.000 20:38:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:50.000 20:38:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:10:50.000 20:38:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:10:50.000 20:38:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:10:50.000 20:38:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:10:50.000 20:38:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:10:50.000 20:38:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@460 -- # nvmf_veth_init 00:10:50.000 20:38:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:50.000 20:38:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:10:50.000 20:38:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:10:50.000 20:38:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:10:50.000 20:38:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:50.000 20:38:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:10:50.000 20:38:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:50.000 20:38:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:10:50.001 20:38:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:50.001 20:38:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:10:50.001 20:38:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:50.001 20:38:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:50.001 20:38:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:50.001 20:38:44 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:50.001 20:38:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:50.001 20:38:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:50.001 20:38:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:10:50.001 Cannot find device "nvmf_init_br" 00:10:50.001 20:38:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # true 00:10:50.001 20:38:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:10:50.001 Cannot find device "nvmf_init_br2" 00:10:50.001 20:38:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # true 00:10:50.001 20:38:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:10:50.001 Cannot find device "nvmf_tgt_br" 00:10:50.001 20:38:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@164 -- # true 00:10:50.001 20:38:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:10:50.001 Cannot find device "nvmf_tgt_br2" 00:10:50.001 20:38:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@165 -- # true 00:10:50.001 20:38:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:10:50.001 Cannot find device "nvmf_init_br" 00:10:50.001 20:38:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # true 00:10:50.001 20:38:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:10:50.001 Cannot find device "nvmf_init_br2" 00:10:50.001 20:38:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@167 -- # true 00:10:50.001 20:38:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:10:50.001 Cannot find device "nvmf_tgt_br" 00:10:50.001 20:38:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@168 -- # true 00:10:50.001 20:38:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:10:50.001 Cannot find device "nvmf_tgt_br2" 00:10:50.001 20:38:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # true 00:10:50.001 20:38:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:10:50.001 Cannot find device "nvmf_br" 00:10:50.001 20:38:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # true 00:10:50.001 20:38:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:10:50.001 Cannot find device "nvmf_init_if" 00:10:50.001 20:38:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # true 00:10:50.001 20:38:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:10:50.001 Cannot find device "nvmf_init_if2" 00:10:50.001 20:38:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # true 00:10:50.001 20:38:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:50.001 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:50.001 20:38:44 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@173 -- # true 00:10:50.001 20:38:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:50.001 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:50.001 20:38:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # true 00:10:50.001 20:38:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:10:50.001 20:38:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:50.260 20:38:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:10:50.260 20:38:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:50.260 20:38:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:50.260 20:38:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:50.260 20:38:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:50.260 20:38:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:50.260 20:38:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:10:50.260 20:38:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:10:50.260 20:38:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:10:50.260 20:38:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:10:50.260 20:38:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:10:50.260 20:38:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:10:50.260 20:38:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:10:50.260 20:38:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:10:50.260 20:38:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:10:50.260 20:38:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:50.260 20:38:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:50.260 20:38:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:50.260 20:38:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:10:50.260 20:38:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:10:50.260 20:38:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:10:50.260 
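The nvmf_veth_init block traced here (nvmf/common.sh@177-219) builds the whole test network in software: a target network namespace, veth pairs whose bridge-side ends are enslaved to nvmf_br, addresses 10.0.0.1/.2 on the initiator side and 10.0.0.3/.4 inside the namespace, plus iptables ACCEPT rules for port 4420. A reduced sketch with a single interface pair looks roughly like this (names and addresses copied from the trace; the second pair and some link-up steps are omitted):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator end + bridge end
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br       # target end + bridge end
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                 # move the target end into the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br                        # enslave both bridge-side ends
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
  ping -c 1 10.0.0.3                                             # initiator -> target sanity check

The SPDK_NVMF comments are what the teardown helper (iptr) keys on: it runs iptables-save, drops every line carrying that comment via grep -v SPDK_NVMF, and feeds the remainder back through iptables-restore, as seen in the nvmftestfini traces elsewhere in this log.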
20:38:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:10:50.260 20:38:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:50.260 20:38:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:50.260 20:38:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:50.260 20:38:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:10:50.260 20:38:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:10:50.260 20:38:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:10:50.260 20:38:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:50.260 20:38:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:10:50.261 20:38:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:10:50.261 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:50.261 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.080 ms 00:10:50.261 00:10:50.261 --- 10.0.0.3 ping statistics --- 00:10:50.261 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:50.261 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:10:50.261 20:38:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:10:50.261 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:10:50.261 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.055 ms 00:10:50.261 00:10:50.261 --- 10.0.0.4 ping statistics --- 00:10:50.261 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:50.261 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:10:50.261 20:38:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:50.261 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:50.261 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:10:50.261 00:10:50.261 --- 10.0.0.1 ping statistics --- 00:10:50.261 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:50.261 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:10:50.261 20:38:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:10:50.261 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:50.261 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.108 ms 00:10:50.261 00:10:50.261 --- 10.0.0.2 ping statistics --- 00:10:50.261 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:50.261 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:10:50.261 20:38:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:50.261 20:38:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@461 -- # return 0 00:10:50.261 20:38:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:50.261 20:38:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:50.261 20:38:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:50.261 20:38:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:50.261 20:38:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:50.261 20:38:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:50.261 20:38:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:50.519 20:38:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:10:50.519 20:38:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:50.519 20:38:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:50.519 20:38:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:50.519 20:38:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=64770 00:10:50.519 20:38:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 64770 00:10:50.519 20:38:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:50.519 20:38:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 64770 ']' 00:10:50.519 20:38:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:50.519 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:50.519 20:38:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:50.519 20:38:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:50.519 20:38:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:50.519 20:38:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:50.519 [2024-11-26 20:38:45.362473] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:10:50.519 [2024-11-26 20:38:45.362850] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:50.777 [2024-11-26 20:38:45.543321] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:50.777 [2024-11-26 20:38:45.620505] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:50.778 [2024-11-26 20:38:45.620788] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:50.778 [2024-11-26 20:38:45.620807] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:50.778 [2024-11-26 20:38:45.620816] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:50.778 [2024-11-26 20:38:45.620824] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:50.778 [2024-11-26 20:38:45.621150] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:50.778 [2024-11-26 20:38:45.701147] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:51.714 20:38:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:51.714 20:38:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:10:51.714 20:38:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:51.714 20:38:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:51.714 20:38:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:51.714 20:38:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:51.714 20:38:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:51.714 20:38:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.714 20:38:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:51.714 [2024-11-26 20:38:46.497129] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:51.714 20:38:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.714 20:38:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:51.714 20:38:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.715 20:38:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:51.715 Malloc0 00:10:51.715 20:38:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.715 20:38:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:51.715 20:38:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.715 20:38:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # 
set +x 00:10:51.715 20:38:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.715 20:38:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:51.715 20:38:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.715 20:38:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:51.715 20:38:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.715 20:38:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:10:51.715 20:38:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.715 20:38:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:51.715 [2024-11-26 20:38:46.561488] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:51.715 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:51.715 20:38:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.715 20:38:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=64807 00:10:51.715 20:38:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:10:51.715 20:38:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:51.715 20:38:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 64807 /var/tmp/bdevperf.sock 00:10:51.715 20:38:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 64807 ']' 00:10:51.715 20:38:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:51.715 20:38:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:51.715 20:38:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:51.715 20:38:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:51.715 20:38:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:51.715 [2024-11-26 20:38:46.610365] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:10:51.715 [2024-11-26 20:38:46.610651] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64807 ] 00:10:51.974 [2024-11-26 20:38:46.757351] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:51.974 [2024-11-26 20:38:46.840207] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:51.974 [2024-11-26 20:38:46.889974] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:51.974 20:38:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:51.974 20:38:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:10:51.974 20:38:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:10:51.974 20:38:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.974 20:38:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:52.233 NVMe0n1 00:10:52.233 20:38:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.233 20:38:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:10:52.233 Running I/O for 10 seconds... 00:10:54.573 7804.00 IOPS, 30.48 MiB/s [2024-11-26T20:38:50.499Z] 8256.50 IOPS, 32.25 MiB/s [2024-11-26T20:38:51.437Z] 8238.67 IOPS, 32.18 MiB/s [2024-11-26T20:38:52.372Z] 8615.75 IOPS, 33.66 MiB/s [2024-11-26T20:38:53.328Z] 8877.00 IOPS, 34.68 MiB/s [2024-11-26T20:38:54.279Z] 9059.33 IOPS, 35.39 MiB/s [2024-11-26T20:38:55.217Z] 9187.29 IOPS, 35.89 MiB/s [2024-11-26T20:38:56.590Z] 9352.38 IOPS, 36.53 MiB/s [2024-11-26T20:38:57.208Z] 9420.11 IOPS, 36.80 MiB/s [2024-11-26T20:38:57.470Z] 9506.80 IOPS, 37.14 MiB/s 00:11:02.477 Latency(us) 00:11:02.477 [2024-11-26T20:38:57.470Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:02.477 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:11:02.477 Verification LBA range: start 0x0 length 0x4000 00:11:02.477 NVMe0n1 : 10.09 9519.07 37.18 0.00 0.00 107057.99 21470.84 81888.79 00:11:02.477 [2024-11-26T20:38:57.470Z] =================================================================================================================== 00:11:02.477 [2024-11-26T20:38:57.470Z] Total : 9519.07 37.18 0.00 0.00 107057.99 21470.84 81888.79 00:11:02.477 { 00:11:02.477 "results": [ 00:11:02.477 { 00:11:02.477 "job": "NVMe0n1", 00:11:02.477 "core_mask": "0x1", 00:11:02.477 "workload": "verify", 00:11:02.477 "status": "finished", 00:11:02.477 "verify_range": { 00:11:02.477 "start": 0, 00:11:02.477 "length": 16384 00:11:02.477 }, 00:11:02.477 "queue_depth": 1024, 00:11:02.477 "io_size": 4096, 00:11:02.477 "runtime": 10.087439, 00:11:02.477 "iops": 9519.066236732633, 00:11:02.477 "mibps": 37.18385248723685, 00:11:02.477 "io_failed": 0, 00:11:02.477 "io_timeout": 0, 00:11:02.477 "avg_latency_us": 107057.98820651599, 00:11:02.477 "min_latency_us": 21470.841904761906, 00:11:02.477 "max_latency_us": 81888.79238095238 00:11:02.478 
} 00:11:02.478 ], 00:11:02.478 "core_count": 1 00:11:02.478 } 00:11:02.478 20:38:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 64807 00:11:02.478 20:38:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 64807 ']' 00:11:02.478 20:38:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 64807 00:11:02.478 20:38:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:11:02.478 20:38:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:02.478 20:38:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64807 00:11:02.478 20:38:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:02.478 20:38:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:02.478 20:38:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64807' 00:11:02.478 killing process with pid 64807 00:11:02.478 Received shutdown signal, test time was about 10.000000 seconds 00:11:02.478 00:11:02.478 Latency(us) 00:11:02.478 [2024-11-26T20:38:57.471Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:02.478 [2024-11-26T20:38:57.471Z] =================================================================================================================== 00:11:02.478 [2024-11-26T20:38:57.471Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:11:02.478 20:38:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 64807 00:11:02.478 20:38:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 64807 00:11:02.750 20:38:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:11:02.750 20:38:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:11:02.750 20:38:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:02.750 20:38:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:11:02.750 20:38:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:02.750 20:38:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:11:02.750 20:38:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:02.750 20:38:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:02.750 rmmod nvme_tcp 00:11:02.750 rmmod nvme_fabrics 00:11:02.750 rmmod nvme_keyring 00:11:02.750 20:38:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:02.750 20:38:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:11:02.750 20:38:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:11:02.750 20:38:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 64770 ']' 00:11:02.750 20:38:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 64770 00:11:02.750 20:38:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 64770 ']' 00:11:02.750 
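For orientation, the queue_depth run that just finished wires things up in two halves, both visible in the trace above: target-side RPCs against the nvmf_tgt started with -m 0x2 (queue_depth.sh@23-27), then a bdevperf started with -z and driven over its own RPC socket (queue_depth.sh@29-35). rpc_cmd in the trace is autotest shorthand; the sketch below assumes it maps onto scripts/rpc.py with the same arguments.

  # Target side (default socket /var/tmp/spdk.sock; 10.0.0.3 comes from the veth setup):
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

  # Initiator side: bdevperf idles with -z until it is configured and kicked via RPC:
  build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
      -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

At the reported 9519.07 IOPS with 4096-byte I/O this works out to 9519.07 * 4096 / 1048576 ≈ 37.18 MiB/s, which matches the MiB/s column in the results block above.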
20:38:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 64770 00:11:02.750 20:38:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:11:02.750 20:38:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:02.750 20:38:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64770 00:11:02.750 killing process with pid 64770 00:11:02.750 20:38:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:11:02.750 20:38:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:11:02.750 20:38:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64770' 00:11:02.750 20:38:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 64770 00:11:02.750 20:38:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 64770 00:11:03.008 20:38:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:03.008 20:38:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:03.008 20:38:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:03.008 20:38:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:11:03.008 20:38:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:11:03.008 20:38:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:03.008 20:38:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:11:03.008 20:38:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:03.265 20:38:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:11:03.265 20:38:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:11:03.265 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:11:03.265 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:11:03.265 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:11:03.265 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:11:03.265 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:11:03.265 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:11:03.265 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:11:03.265 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:11:03.265 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:11:03.265 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:11:03.265 20:38:58 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:03.265 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:03.265 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@246 -- # remove_spdk_ns 00:11:03.265 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:03.265 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:03.265 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:03.265 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@300 -- # return 0 00:11:03.265 00:11:03.265 real 0m13.681s 00:11:03.265 user 0m22.403s 00:11:03.265 sys 0m2.767s 00:11:03.265 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:03.265 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:03.265 ************************************ 00:11:03.265 END TEST nvmf_queue_depth 00:11:03.265 ************************************ 00:11:03.524 20:38:58 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:11:03.524 20:38:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:03.524 20:38:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:03.524 20:38:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:03.524 ************************************ 00:11:03.524 START TEST nvmf_target_multipath 00:11:03.524 ************************************ 00:11:03.524 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:11:03.524 * Looking for test storage... 
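[Annotation, not part of the captured console output] The run_test wrapper invoked above starts the multipath target test and, as with the queue_depth stage that just ended, brackets it with the START/END TEST banners and a real/user/sys timing summary. A rough equivalent for rerunning this stage by hand, assuming the same repository checkout under /home/vagrant/spdk_repo/spdk (the cd and sudo are illustrative assumptions, the harness runs with its environment already exported):

    cd /home/vagrant/spdk_repo/spdk
    sudo ./test/nvmf/target/multipath.sh --transport=tcp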
00:11:03.524 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:03.524 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:03.524 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:11:03.524 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:03.524 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:03.524 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:03.524 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:03.524 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:03.524 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:11:03.524 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:11:03.524 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:11:03.524 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:11:03.524 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:11:03.524 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:11:03.524 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:11:03.524 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:03.524 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:11:03.524 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:11:03.524 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:03.524 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:03.524 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:11:03.524 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:11:03.524 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:03.524 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:11:03.524 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:11:03.524 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:11:03.524 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:11:03.524 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:03.524 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:11:03.524 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:11:03.524 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:03.524 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:03.524 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:11:03.524 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:03.524 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:03.524 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:03.524 --rc genhtml_branch_coverage=1 00:11:03.524 --rc genhtml_function_coverage=1 00:11:03.524 --rc genhtml_legend=1 00:11:03.524 --rc geninfo_all_blocks=1 00:11:03.524 --rc geninfo_unexecuted_blocks=1 00:11:03.524 00:11:03.524 ' 00:11:03.524 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:03.524 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:03.524 --rc genhtml_branch_coverage=1 00:11:03.524 --rc genhtml_function_coverage=1 00:11:03.524 --rc genhtml_legend=1 00:11:03.524 --rc geninfo_all_blocks=1 00:11:03.524 --rc geninfo_unexecuted_blocks=1 00:11:03.524 00:11:03.524 ' 00:11:03.524 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:03.524 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:03.524 --rc genhtml_branch_coverage=1 00:11:03.524 --rc genhtml_function_coverage=1 00:11:03.524 --rc genhtml_legend=1 00:11:03.524 --rc geninfo_all_blocks=1 00:11:03.524 --rc geninfo_unexecuted_blocks=1 00:11:03.524 00:11:03.524 ' 00:11:03.524 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:03.524 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:03.524 --rc genhtml_branch_coverage=1 00:11:03.524 --rc genhtml_function_coverage=1 00:11:03.524 --rc genhtml_legend=1 00:11:03.524 --rc geninfo_all_blocks=1 00:11:03.524 --rc geninfo_unexecuted_blocks=1 00:11:03.524 00:11:03.524 ' 00:11:03.524 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:03.524 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:11:03.524 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:03.524 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:03.524 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:03.524 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:03.524 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:03.524 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:03.524 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:03.524 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:03.524 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:03.524 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:03.524 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b 00:11:03.524 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=5b7a0101-ee75-44bd-b64f-b6a56d193f2b 00:11:03.524 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:03.524 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:03.524 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:03.524 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:03.524 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:03.524 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:11:03.524 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:03.524 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:03.524 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:03.524 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:03.524 
20:38:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:03.524 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:03.524 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:11:03.524 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:03.525 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:11:03.525 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:03.525 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:03.525 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:03.525 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:03.525 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:03.525 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:03.525 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:03.525 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:03.525 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:03.525 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:03.525 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:11:03.525 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:03.525 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:11:03.525 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:03.525 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:11:03.525 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:03.525 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:03.783 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:03.783 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:03.783 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:03.783 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:03.783 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:03.783 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:03.783 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:11:03.783 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:11:03.783 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:11:03.783 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:11:03.783 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:11:03.783 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@460 -- # nvmf_veth_init 00:11:03.783 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:03.783 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:11:03.783 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:11:03.783 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:11:03.783 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:03.783 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:11:03.783 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:03.783 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:11:03.783 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:03.783 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:11:03.783 20:38:58 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:03.783 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:03.783 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:03.783 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:03.783 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:03.783 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:03.783 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:11:03.783 Cannot find device "nvmf_init_br" 00:11:03.783 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # true 00:11:03.783 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:11:03.784 Cannot find device "nvmf_init_br2" 00:11:03.784 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # true 00:11:03.784 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:11:03.784 Cannot find device "nvmf_tgt_br" 00:11:03.784 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@164 -- # true 00:11:03.784 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:11:03.784 Cannot find device "nvmf_tgt_br2" 00:11:03.784 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@165 -- # true 00:11:03.784 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:11:03.784 Cannot find device "nvmf_init_br" 00:11:03.784 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # true 00:11:03.784 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:11:03.784 Cannot find device "nvmf_init_br2" 00:11:03.784 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@167 -- # true 00:11:03.784 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:11:03.784 Cannot find device "nvmf_tgt_br" 00:11:03.784 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@168 -- # true 00:11:03.784 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:11:03.784 Cannot find device "nvmf_tgt_br2" 00:11:03.784 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # true 00:11:03.784 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:11:03.784 Cannot find device "nvmf_br" 00:11:03.784 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # true 00:11:03.784 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:11:03.784 Cannot find device "nvmf_init_if" 00:11:03.784 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@171 -- # true 00:11:03.784 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:11:03.784 Cannot find device "nvmf_init_if2" 00:11:03.784 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@172 -- # true 00:11:03.784 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:03.784 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:03.784 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@173 -- # true 00:11:03.784 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:03.784 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:03.784 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # true 00:11:03.784 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:11:03.784 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:03.784 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:11:03.784 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:03.784 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:03.784 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:03.784 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:03.784 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:03.784 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:11:03.784 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:11:03.784 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:11:04.044 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:11:04.044 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:11:04.044 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:11:04.044 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:11:04.044 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:11:04.044 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:11:04.044 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 
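[Annotation, not part of the captured console output] The nvmf_veth_init trace above builds the test topology out of veth pairs: each pair has a functional end named *_if and a peer end named *_br that will become a bridge port. nvmf_init_if and nvmf_init_if2 stay in the host namespace with 10.0.0.1 and 10.0.0.2, while nvmf_tgt_if and nvmf_tgt_if2 are moved into nvmf_tgt_ns_spdk with 10.0.0.3 and 10.0.0.4; the four *_br ends are joined by the nvmf_br bridge in the entries that follow just below, which is what gives the initiator two independent paths to the target. A condensed sketch of one of the two paths, using only names and addresses taken from the trace and omitting the second pair, iptables rules, and error handling:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up && ip link set nvmf_init_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link set nvmf_tgt_br up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br

The second interface pair (10.0.0.2 in the host namespace, 10.0.0.4 inside nvmf_tgt_ns_spdk) is wired the same way; the ping checks below then verify all four addresses before the target is started.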
00:11:04.044 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:04.044 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:04.044 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:11:04.044 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:11:04.044 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:11:04.044 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:11:04.044 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:04.044 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:04.044 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:04.044 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:11:04.044 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:11:04.044 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:11:04.044 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:04.044 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:11:04.044 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:11:04.044 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:04.044 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.107 ms 00:11:04.044 00:11:04.044 --- 10.0.0.3 ping statistics --- 00:11:04.044 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:04.044 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:11:04.044 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:11:04.044 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:11:04.044 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.050 ms 00:11:04.044 00:11:04.044 --- 10.0.0.4 ping statistics --- 00:11:04.044 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:04.044 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:11:04.044 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:04.044 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:04.044 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.040 ms 00:11:04.044 00:11:04.044 --- 10.0.0.1 ping statistics --- 00:11:04.044 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:04.044 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:11:04.044 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:11:04.044 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:04.044 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.092 ms 00:11:04.044 00:11:04.044 --- 10.0.0.2 ping statistics --- 00:11:04.044 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:04.044 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:11:04.044 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:04.044 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@461 -- # return 0 00:11:04.044 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:04.044 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:04.044 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:04.044 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:04.044 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:04.044 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:04.044 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:04.044 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 10.0.0.4 ']' 00:11:04.044 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:11:04.044 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:11:04.044 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:04.044 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:04.044 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:11:04.044 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@509 -- # nvmfpid=65178 00:11:04.044 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:04.044 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@510 -- # waitforlisten 65178 00:11:04.044 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@835 -- # '[' -z 65178 ']' 00:11:04.044 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:04.044 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:04.044 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
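[Annotation, not part of the captured console output] nvmfappstart above launches nvmf_tgt inside the nvmf_tgt_ns_spdk namespace (pid 65178 here) and then waits in waitforlisten until the target answers on the /var/tmp/spdk.sock RPC socket named in the message above. The following is a simplified stand-in for that start-and-wait sequence, a sketch of the idea rather than the helper's actual code:

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # Poll the RPC socket until the target responds; the real helper caps this
    # at max_retries=100 (visible in the trace above) and also re-checks that
    # the pid is still alive between attempts.
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done

Because the RPC endpoint is a UNIX domain socket on the shared filesystem, the rpc.py calls that follow in this test (creating the tcp transport, the Malloc0 bdev, the cnode1 subsystem, and the two listeners on 10.0.0.3 and 10.0.0.4) do not need to run inside the network namespace.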
00:11:04.044 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:04.044 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:04.044 20:38:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:11:04.044 [2024-11-26 20:38:59.014997] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:11:04.044 [2024-11-26 20:38:59.015081] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:04.312 [2024-11-26 20:38:59.170116] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:04.312 [2024-11-26 20:38:59.250737] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:04.312 [2024-11-26 20:38:59.250807] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:04.312 [2024-11-26 20:38:59.250824] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:04.312 [2024-11-26 20:38:59.250837] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:04.312 [2024-11-26 20:38:59.250849] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:04.312 [2024-11-26 20:38:59.252360] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:04.312 [2024-11-26 20:38:59.252471] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:04.312 [2024-11-26 20:38:59.252572] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:04.312 [2024-11-26 20:38:59.252573] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:04.570 [2024-11-26 20:38:59.339405] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:04.570 20:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:04.570 20:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@868 -- # return 0 00:11:04.570 20:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:04.570 20:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:04.570 20:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:11:04.570 20:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:04.570 20:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:04.829 [2024-11-26 20:38:59.794346] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:05.088 20:38:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:11:05.346 Malloc0 00:11:05.346 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@62 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:11:05.605 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:05.864 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:11:06.123 [2024-11-26 20:39:00.992205] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:11:06.123 20:39:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 00:11:06.382 [2024-11-26 20:39:01.276491] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.4 port 4420 *** 00:11:06.382 20:39:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b --hostid=5b7a0101-ee75-44bd-b64f-b6a56d193f2b -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:11:06.664 20:39:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b --hostid=5b7a0101-ee75-44bd-b64f-b6a56d193f2b -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.4 -s 4420 -g -G 00:11:06.664 20:39:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:11:06.664 20:39:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1202 -- # local i=0 00:11:06.664 20:39:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:06.664 20:39:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:06.664 20:39:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1209 -- # sleep 2 00:11:09.215 20:39:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:09.215 20:39:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:09.215 20:39:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:09.215 20:39:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:09.215 20:39:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:09.215 20:39:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1212 -- # return 0 00:11:09.215 20:39:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:11:09.215 20:39:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:11:09.215 20:39:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in 
/sys/class/nvme-subsystem/* 00:11:09.215 20:39:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:11:09.215 20:39:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:11:09.215 20:39:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0 00:11:09.215 20:39:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0 00:11:09.215 20:39:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:11:09.215 20:39:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:11:09.215 20:39:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:11:09.215 20:39:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:11:09.215 20:39:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:11:09.215 20:39:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:11:09.215 20:39:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:11:09.215 20:39:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:11:09.215 20:39:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:09.215 20:39:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:11:09.215 20:39:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:11:09.215 20:39:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:11:09.215 20:39:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:11:09.215 20:39:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:11:09.215 20:39:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:09.215 20:39:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:11:09.215 20:39:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:11:09.215 20:39:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:11:09.215 20:39:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@85 -- # echo numa 00:11:09.215 20:39:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@88 -- # fio_pid=65266 00:11:09.215 20:39:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:11:09.215 20:39:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@90 -- # sleep 1 00:11:09.215 [global] 00:11:09.215 thread=1 00:11:09.215 invalidate=1 00:11:09.215 rw=randrw 00:11:09.215 time_based=1 00:11:09.215 runtime=6 00:11:09.215 ioengine=libaio 00:11:09.215 direct=1 00:11:09.215 bs=4096 00:11:09.215 iodepth=128 00:11:09.215 norandommap=0 00:11:09.215 numjobs=1 00:11:09.215 00:11:09.215 verify_dump=1 00:11:09.215 verify_backlog=512 00:11:09.215 verify_state_save=0 00:11:09.215 do_verify=1 00:11:09.215 verify=crc32c-intel 00:11:09.215 [job0] 00:11:09.215 filename=/dev/nvme0n1 00:11:09.215 Could not set queue depth (nvme0n1) 00:11:09.215 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:09.215 fio-3.35 00:11:09.215 Starting 1 thread 00:11:09.782 20:39:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:11:10.040 20:39:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:11:10.299 20:39:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:11:10.299 20:39:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:11:10.299 20:39:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:10.299 20:39:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:11:10.299 20:39:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:11:10.299 20:39:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:11:10.299 20:39:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:11:10.299 20:39:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:11:10.299 20:39:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:10.299 20:39:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:11:10.299 20:39:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:11:10.299 20:39:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:11:10.299 20:39:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:11:10.560 20:39:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:11:10.825 20:39:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:11:10.825 20:39:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:11:10.825 20:39:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:10.825 20:39:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:11:10.825 20:39:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:11:10.825 20:39:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:11:10.825 20:39:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:11:10.825 20:39:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:11:10.825 20:39:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:10.825 20:39:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:11:10.825 20:39:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:11:10.825 20:39:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:11:10.825 20:39:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@104 -- # wait 65266 00:11:15.015 00:11:15.015 job0: (groupid=0, jobs=1): err= 0: pid=65291: Tue Nov 26 20:39:09 2024 00:11:15.015 read: IOPS=11.5k, BW=44.8MiB/s (47.0MB/s)(269MiB/6002msec) 00:11:15.015 slat (usec): min=2, max=6426, avg=50.35, stdev=205.37 00:11:15.015 clat (usec): min=1004, max=15697, avg=7607.57, stdev=1437.68 00:11:15.015 lat (usec): min=1501, max=15706, avg=7657.92, stdev=1442.56 00:11:15.015 clat percentiles (usec): 00:11:15.015 | 1.00th=[ 4015], 5.00th=[ 5604], 10.00th=[ 6390], 20.00th=[ 6849], 00:11:15.015 | 30.00th=[ 7046], 40.00th=[ 7242], 50.00th=[ 7439], 60.00th=[ 7635], 00:11:15.015 | 70.00th=[ 7832], 80.00th=[ 8160], 90.00th=[ 8848], 95.00th=[11076], 00:11:15.015 | 99.00th=[11994], 99.50th=[12387], 99.90th=[12911], 99.95th=[13304], 00:11:15.015 | 99.99th=[15533] 00:11:15.015 bw ( KiB/s): min=17032, max=27656, per=52.45%, avg=24083.64, stdev=3321.81, samples=11 00:11:15.015 iops : min= 4258, max= 6914, avg=6020.91, stdev=830.45, samples=11 00:11:15.015 write: IOPS=6476, BW=25.3MiB/s (26.5MB/s)(142MiB/5605msec); 0 zone resets 00:11:15.015 slat (usec): min=3, max=5119, avg=59.86, stdev=149.88 00:11:15.015 clat (usec): min=1525, max=15319, avg=6531.91, stdev=1274.73 00:11:15.015 lat (usec): min=1548, max=15344, avg=6591.77, stdev=1278.50 00:11:15.015 clat percentiles (usec): 00:11:15.015 | 1.00th=[ 3130], 5.00th=[ 3851], 10.00th=[ 4621], 20.00th=[ 5997], 00:11:15.015 | 30.00th=[ 6325], 40.00th=[ 6521], 50.00th=[ 6718], 60.00th=[ 6915], 00:11:15.015 | 70.00th=[ 7046], 80.00th=[ 7242], 90.00th=[ 7570], 95.00th=[ 7898], 00:11:15.015 | 99.00th=[10552], 99.50th=[11076], 99.90th=[12387], 99.95th=[13960], 00:11:15.015 | 99.99th=[15270] 00:11:15.015 bw ( KiB/s): min=17992, max=27120, per=92.75%, avg=24025.45, stdev=2929.94, samples=11 00:11:15.015 iops : min= 4498, max= 6780, avg=6006.36, stdev=732.49, samples=11 00:11:15.015 lat (msec) : 2=0.01%, 4=2.71%, 10=91.12%, 20=6.16% 00:11:15.015 cpu : usr=5.33%, sys=21.78%, ctx=5961, majf=0, minf=54 00:11:15.015 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:11:15.015 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:15.015 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:15.015 issued rwts: total=68900,36298,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:15.015 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:15.015 00:11:15.015 Run status group 0 (all jobs): 00:11:15.015 READ: bw=44.8MiB/s (47.0MB/s), 44.8MiB/s-44.8MiB/s (47.0MB/s-47.0MB/s), io=269MiB (282MB), run=6002-6002msec 00:11:15.016 WRITE: bw=25.3MiB/s (26.5MB/s), 25.3MiB/s-25.3MiB/s (26.5MB/s-26.5MB/s), io=142MiB (149MB), run=5605-5605msec 00:11:15.016 00:11:15.016 Disk stats (read/write): 00:11:15.016 nvme0n1: ios=67690/35765, merge=0/0, ticks=493558/218019, in_queue=711577, util=98.70% 00:11:15.016 20:39:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:11:15.582 20:39:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n optimized 00:11:15.582 20:39:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:11:15.582 20:39:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:11:15.582 20:39:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:15.582 20:39:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:11:15.582 20:39:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:11:15.582 20:39:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:11:15.582 20:39:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:11:15.582 20:39:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:11:15.582 20:39:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:15.582 20:39:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:11:15.582 20:39:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:11:15.582 20:39:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:11:15.582 20:39:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@113 -- # echo round-robin 00:11:15.841 20:39:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@116 -- # fio_pid=65374 00:11:15.841 20:39:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:11:15.841 20:39:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@118 -- # sleep 1 00:11:15.841 [global] 00:11:15.841 thread=1 00:11:15.841 invalidate=1 00:11:15.841 rw=randrw 00:11:15.841 time_based=1 00:11:15.841 runtime=6 00:11:15.841 ioengine=libaio 00:11:15.841 direct=1 00:11:15.841 bs=4096 00:11:15.841 iodepth=128 00:11:15.841 norandommap=0 00:11:15.841 numjobs=1 00:11:15.841 00:11:15.841 verify_dump=1 00:11:15.841 verify_backlog=512 00:11:15.841 verify_state_save=0 00:11:15.841 do_verify=1 00:11:15.841 verify=crc32c-intel 00:11:15.841 [job0] 00:11:15.841 filename=/dev/nvme0n1 00:11:15.841 Could not set queue depth (nvme0n1) 00:11:15.841 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:15.841 fio-3.35 00:11:15.841 Starting 1 thread 00:11:16.777 20:39:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:11:17.036 20:39:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:11:17.294 
20:39:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:11:17.294 20:39:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:11:17.294 20:39:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:17.294 20:39:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:11:17.294 20:39:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:11:17.294 20:39:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:11:17.294 20:39:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:11:17.294 20:39:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:11:17.294 20:39:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:17.294 20:39:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:11:17.294 20:39:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:11:17.294 20:39:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:11:17.294 20:39:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:11:17.553 20:39:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:11:17.812 20:39:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:11:17.812 20:39:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:11:17.812 20:39:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:17.812 20:39:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:11:17.812 20:39:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:11:17.812 20:39:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:11:17.812 20:39:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:11:17.812 20:39:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:11:17.812 20:39:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:17.813 20:39:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:11:17.813 20:39:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:11:17.813 20:39:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:11:17.813 20:39:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@132 -- # wait 65374 00:11:22.127 00:11:22.127 job0: (groupid=0, jobs=1): err= 0: pid=65395: Tue Nov 26 20:39:16 2024 00:11:22.127 read: IOPS=12.7k, BW=49.7MiB/s (52.2MB/s)(299MiB/6002msec) 00:11:22.127 slat (usec): min=3, max=7460, avg=38.90, stdev=166.57 00:11:22.127 clat (usec): min=246, max=20728, avg=6964.20, stdev=2123.69 00:11:22.127 lat (usec): min=257, max=20738, avg=7003.10, stdev=2130.72 00:11:22.127 clat percentiles (usec): 00:11:22.127 | 1.00th=[ 1057], 5.00th=[ 3163], 10.00th=[ 4293], 20.00th=[ 5997], 00:11:22.127 | 30.00th=[ 6587], 40.00th=[ 6849], 50.00th=[ 7046], 60.00th=[ 7242], 00:11:22.127 | 70.00th=[ 7504], 80.00th=[ 7898], 90.00th=[ 8848], 95.00th=[10814], 00:11:22.127 | 99.00th=[12911], 99.50th=[13960], 99.90th=[17695], 99.95th=[18744], 00:11:22.127 | 99.99th=[20317] 00:11:22.127 bw ( KiB/s): min=10664, max=36776, per=52.93%, avg=26958.55, stdev=7811.23, samples=11 00:11:22.127 iops : min= 2666, max= 9194, avg=6739.64, stdev=1952.81, samples=11 00:11:22.127 write: IOPS=7498, BW=29.3MiB/s (30.7MB/s)(150MiB/5121msec); 0 zone resets 00:11:22.127 slat (usec): min=5, max=4722, avg=48.87, stdev=120.80 00:11:22.127 clat (usec): min=192, max=19237, avg=5986.53, stdev=1870.80 00:11:22.127 lat (usec): min=237, max=19260, avg=6035.40, stdev=1879.78 00:11:22.127 clat percentiles (usec): 00:11:22.127 | 1.00th=[ 857], 5.00th=[ 2900], 10.00th=[ 3589], 20.00th=[ 4424], 00:11:22.127 | 30.00th=[ 5473], 40.00th=[ 6063], 50.00th=[ 6325], 60.00th=[ 6587], 00:11:22.127 | 70.00th=[ 6783], 80.00th=[ 7111], 90.00th=[ 7504], 95.00th=[ 8225], 00:11:22.127 | 99.00th=[11207], 99.50th=[12256], 99.90th=[16712], 99.95th=[17957], 00:11:22.127 | 99.99th=[19006] 00:11:22.127 bw ( KiB/s): min=10976, max=37712, per=89.79%, avg=26932.36, stdev=7666.03, samples=11 00:11:22.127 iops : min= 2744, max= 9428, avg=6733.09, stdev=1916.51, samples=11 00:11:22.127 lat (usec) : 250=0.01%, 500=0.14%, 750=0.35%, 1000=0.56% 00:11:22.127 lat (msec) : 2=1.75%, 4=7.98%, 10=83.32%, 20=5.89%, 50=0.01% 00:11:22.127 cpu : usr=5.40%, sys=23.13%, ctx=7145, majf=0, minf=90 00:11:22.127 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:11:22.127 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:22.127 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:22.127 issued rwts: total=76425,38400,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:22.127 latency 
: target=0, window=0, percentile=100.00%, depth=128 00:11:22.127 00:11:22.127 Run status group 0 (all jobs): 00:11:22.127 READ: bw=49.7MiB/s (52.2MB/s), 49.7MiB/s-49.7MiB/s (52.2MB/s-52.2MB/s), io=299MiB (313MB), run=6002-6002msec 00:11:22.127 WRITE: bw=29.3MiB/s (30.7MB/s), 29.3MiB/s-29.3MiB/s (30.7MB/s-30.7MB/s), io=150MiB (157MB), run=5121-5121msec 00:11:22.127 00:11:22.127 Disk stats (read/write): 00:11:22.127 nvme0n1: ios=74682/38400, merge=0/0, ticks=497121/215890, in_queue=713011, util=98.53% 00:11:22.127 20:39:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:22.127 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:11:22.127 20:39:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:22.127 20:39:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1223 -- # local i=0 00:11:22.127 20:39:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:22.127 20:39:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:22.127 20:39:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:22.127 20:39:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:22.127 20:39:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1235 -- # return 0 00:11:22.127 20:39:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:22.386 20:39:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:11:22.386 20:39:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:11:22.386 20:39:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:11:22.386 20:39:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@144 -- # nvmftestfini 00:11:22.386 20:39:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:22.386 20:39:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:11:22.386 20:39:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:22.386 20:39:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:11:22.386 20:39:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:22.386 20:39:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:22.386 rmmod nvme_tcp 00:11:22.386 rmmod nvme_fabrics 00:11:22.386 rmmod nvme_keyring 00:11:22.645 20:39:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:22.645 20:39:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:11:22.645 20:39:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:11:22.645 20:39:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- 
# '[' -n 65178 ']' 00:11:22.646 20:39:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@518 -- # killprocess 65178 00:11:22.646 20:39:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@954 -- # '[' -z 65178 ']' 00:11:22.646 20:39:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@958 -- # kill -0 65178 00:11:22.646 20:39:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@959 -- # uname 00:11:22.646 20:39:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:22.646 20:39:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65178 00:11:22.646 killing process with pid 65178 00:11:22.646 20:39:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:22.646 20:39:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:22.646 20:39:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65178' 00:11:22.646 20:39:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@973 -- # kill 65178 00:11:22.646 20:39:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@978 -- # wait 65178 00:11:22.905 20:39:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:22.905 20:39:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:22.905 20:39:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:22.905 20:39:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:11:22.905 20:39:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:22.905 20:39:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:11:22.905 20:39:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:11:22.905 20:39:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:22.905 20:39:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:11:22.905 20:39:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:11:22.905 20:39:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:11:22.905 20:39:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:11:22.905 20:39:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:11:22.905 20:39:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:11:22.905 20:39:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:11:22.905 20:39:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:11:22.905 20:39:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:11:22.905 
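The disconnect/teardown phase above leans on two autotest_common.sh helpers whose shape can be read from the traced line numbers: waitforserial_disconnect spins until no block device reports the test serial, and killprocess signals the target process and reaps it. A simplified sketch (retry limit, error paths and the sudo special-case are assumptions):

  waitforserial_disconnect() {
      local serial=$1 i=0
      # wait until lsblk no longer lists a namespace carrying this NVMe serial number
      while lsblk -l -o NAME,SERIAL | grep -q -w "$serial"; do
          (( i++ < 15 )) || return 1
          sleep 1
      done
      return 0
  }

  killprocess() {
      local pid=$1
      kill -0 "$pid" || return 0                        # already gone
      local process_name
      process_name=$(ps --no-headers -o comm= "$pid")   # e.g. reactor_0 for nvmf_tgt
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid" || true
  }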
20:39:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:11:22.905 20:39:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:11:23.169 20:39:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:11:23.169 20:39:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:23.169 20:39:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:23.169 20:39:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns 00:11:23.169 20:39:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:23.169 20:39:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:23.169 20:39:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:23.169 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@300 -- # return 0 00:11:23.169 00:11:23.169 real 0m19.706s 00:11:23.169 user 1m11.020s 00:11:23.169 sys 0m11.985s 00:11:23.169 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:23.169 ************************************ 00:11:23.169 END TEST nvmf_target_multipath 00:11:23.169 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:11:23.169 ************************************ 00:11:23.169 20:39:18 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:11:23.169 20:39:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:23.169 20:39:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:23.169 20:39:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:23.169 ************************************ 00:11:23.169 START TEST nvmf_zcopy 00:11:23.169 ************************************ 00:11:23.169 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:11:23.169 * Looking for test storage... 
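Each test script is launched through run_test, which produces the START TEST/END TEST banners and the real/user/sys timing printed above. A rough sketch of that wrapper, inferred only from the banners and timing lines in this log; the actual body in autotest_common.sh may differ:

  run_test() {
      local test_name=$1; shift
      echo "************************************"
      echo "START TEST $test_name"
      echo "************************************"
      time "$@"     # e.g. run_test nvmf_zcopy .../test/nvmf/target/zcopy.sh --transport=tcp
      echo "************************************"
      echo "END TEST $test_name"
      echo "************************************"
  }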
00:11:23.437 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:23.437 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:23.437 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:11:23.437 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:23.437 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:23.437 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:23.437 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:23.437 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:23.437 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:11:23.437 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:11:23.437 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:11:23.437 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:11:23.437 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:11:23.437 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:11:23.437 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:11:23.437 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:23.437 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:11:23.437 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:11:23.437 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:23.437 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:23.437 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:11:23.437 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:11:23.437 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:23.437 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:11:23.437 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:11:23.437 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:11:23.437 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:11:23.437 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:23.437 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:11:23.437 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:11:23.437 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:23.437 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:23.437 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:11:23.437 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:23.437 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:23.437 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:23.437 --rc genhtml_branch_coverage=1 00:11:23.437 --rc genhtml_function_coverage=1 00:11:23.437 --rc genhtml_legend=1 00:11:23.437 --rc geninfo_all_blocks=1 00:11:23.437 --rc geninfo_unexecuted_blocks=1 00:11:23.437 00:11:23.437 ' 00:11:23.438 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:23.438 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:23.438 --rc genhtml_branch_coverage=1 00:11:23.438 --rc genhtml_function_coverage=1 00:11:23.438 --rc genhtml_legend=1 00:11:23.438 --rc geninfo_all_blocks=1 00:11:23.438 --rc geninfo_unexecuted_blocks=1 00:11:23.438 00:11:23.438 ' 00:11:23.438 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:23.438 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:23.438 --rc genhtml_branch_coverage=1 00:11:23.438 --rc genhtml_function_coverage=1 00:11:23.438 --rc genhtml_legend=1 00:11:23.438 --rc geninfo_all_blocks=1 00:11:23.438 --rc geninfo_unexecuted_blocks=1 00:11:23.438 00:11:23.438 ' 00:11:23.438 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:23.438 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:23.438 --rc genhtml_branch_coverage=1 00:11:23.438 --rc genhtml_function_coverage=1 00:11:23.438 --rc genhtml_legend=1 00:11:23.438 --rc geninfo_all_blocks=1 00:11:23.438 --rc geninfo_unexecuted_blocks=1 00:11:23.438 00:11:23.438 ' 00:11:23.438 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:23.438 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:11:23.438 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
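The cmp_versions trace above decides whether the installed lcov is older than 2.x so the matching LCOV_OPTS can be exported. A condensed sketch of the comparison logic visible in the scripts/common.sh trace; padding of the shorter version array and the remaining operators are abbreviated:

  lt() { cmp_versions "$1" '<' "$2"; }
  cmp_versions() {
      local IFS=.-: op=$2
      read -ra ver1 <<< "$1"
      read -ra ver2 <<< "$3"
      local v
      for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
          (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $op == '>' ]]; return; }
          (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $op == '<' ]]; return; }
      done
      [[ $op == *'='* ]]   # equal versions satisfy ==, <=, >=
  }
  # usage as in the trace: lt "$(lcov --version | awk '{print $NF}')" 2 && pick the 1.x-style --rc lcov_* options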
00:11:23.438 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:23.438 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:23.438 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:23.438 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:23.438 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:23.438 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:23.438 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:23.438 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:23.438 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:23.438 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b 00:11:23.438 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=5b7a0101-ee75-44bd-b64f-b6a56d193f2b 00:11:23.438 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:23.438 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:23.438 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:23.438 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:23.438 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:23.438 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:11:23.438 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:23.438 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:23.438 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:23.438 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:23.438 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:23.438 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:23.438 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:11:23.438 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:23.438 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:11:23.438 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:23.438 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:23.438 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:23.438 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:23.438 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:23.438 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:23.438 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:23.438 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:23.438 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:23.438 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:23.438 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:11:23.438 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:23.438 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
00:11:23.438 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:23.438 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:23.438 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:23.438 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:23.438 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:23.438 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:23.438 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:11:23.438 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:11:23.438 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:11:23.438 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:11:23.438 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:11:23.438 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@460 -- # nvmf_veth_init 00:11:23.438 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:23.438 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:11:23.438 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:11:23.438 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:11:23.438 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:23.438 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:11:23.438 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:23.438 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:11:23.438 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:23.438 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:11:23.438 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:23.438 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:23.438 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:23.438 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:23.438 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:23.438 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:23.438 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:11:23.438 Cannot find device "nvmf_init_br" 00:11:23.438 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # true 00:11:23.438 20:39:18 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:11:23.438 Cannot find device "nvmf_init_br2" 00:11:23.438 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # true 00:11:23.438 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:11:23.438 Cannot find device "nvmf_tgt_br" 00:11:23.438 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@164 -- # true 00:11:23.438 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:11:23.438 Cannot find device "nvmf_tgt_br2" 00:11:23.438 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@165 -- # true 00:11:23.438 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:11:23.438 Cannot find device "nvmf_init_br" 00:11:23.438 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # true 00:11:23.439 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:11:23.439 Cannot find device "nvmf_init_br2" 00:11:23.439 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@167 -- # true 00:11:23.439 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:11:23.439 Cannot find device "nvmf_tgt_br" 00:11:23.439 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@168 -- # true 00:11:23.439 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:11:23.698 Cannot find device "nvmf_tgt_br2" 00:11:23.698 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # true 00:11:23.698 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:11:23.698 Cannot find device "nvmf_br" 00:11:23.698 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # true 00:11:23.698 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:11:23.698 Cannot find device "nvmf_init_if" 00:11:23.698 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # true 00:11:23.698 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:11:23.698 Cannot find device "nvmf_init_if2" 00:11:23.698 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # true 00:11:23.698 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:23.698 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:23.698 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@173 -- # true 00:11:23.698 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:23.698 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:23.698 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # true 00:11:23.698 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:11:23.698 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:23.698 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type 
veth peer name nvmf_init_br2 00:11:23.698 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:23.698 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:23.698 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:23.698 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:23.698 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:23.698 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:11:23.698 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:11:23.698 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:11:23.698 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:11:23.698 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:11:23.698 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:11:23.698 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:11:23.699 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:11:23.699 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:11:23.699 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:23.699 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:23.699 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:23.699 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:11:23.958 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:11:23.958 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:11:23.958 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:11:23.958 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:23.958 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:23.958 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:23.958 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:11:23.958 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:11:23.958 20:39:18 
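nvmf_veth_init, traced above, builds the virtual test topology: a target network namespace, two veth pairs per side, a bridge tying the peer ends together, and iptables rules admitting NVMe/TCP traffic on port 4420. Collected here in one place from the commands in the trace (the per-interface "ip link set ... up" steps are omitted for brevity):

  ip netns add nvmf_tgt_ns_spdk
  # veth pairs: the *_if ends carry addresses, the *_br ends get enslaved to the bridge
  ip link add nvmf_init_if  type veth peer name nvmf_init_br
  ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
  ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if            # first initiator address
  ip addr add 10.0.0.2/24 dev nvmf_init_if2           # second initiator address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if    # first target address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2   # second target address
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$dev" master nvmf_br
  done
  iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
  iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  # connectivity check in both directions, as the log does right after this
  ping -c 1 10.0.0.3 && ping -c 1 10.0.0.4
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2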
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:11:23.958 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:23.958 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:11:23.958 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:11:23.958 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:23.958 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.110 ms 00:11:23.958 00:11:23.958 --- 10.0.0.3 ping statistics --- 00:11:23.958 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:23.958 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:11:23.958 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:11:23.958 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:11:23.958 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.058 ms 00:11:23.958 00:11:23.958 --- 10.0.0.4 ping statistics --- 00:11:23.958 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:23.958 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:11:23.958 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:23.958 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:23.958 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:11:23.958 00:11:23.958 --- 10.0.0.1 ping statistics --- 00:11:23.958 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:23.958 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:11:23.958 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:11:23.958 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:23.958 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.087 ms 00:11:23.958 00:11:23.958 --- 10.0.0.2 ping statistics --- 00:11:23.958 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:23.958 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:11:23.958 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:23.958 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@461 -- # return 0 00:11:23.958 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:23.958 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:23.958 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:23.958 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:23.958 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:23.958 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:23.958 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:23.958 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:11:23.958 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:23.958 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:23.958 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:23.958 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=65697 00:11:23.958 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 65697 00:11:23.958 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:11:23.958 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 65697 ']' 00:11:23.958 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:23.958 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:23.958 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:23.958 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:23.958 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:23.958 20:39:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:23.958 [2024-11-26 20:39:18.878969] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:11:23.958 [2024-11-26 20:39:18.879095] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:24.217 [2024-11-26 20:39:19.040141] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:24.217 [2024-11-26 20:39:19.117433] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:24.217 [2024-11-26 20:39:19.117501] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:24.217 [2024-11-26 20:39:19.117516] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:24.217 [2024-11-26 20:39:19.117530] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:24.217 [2024-11-26 20:39:19.117542] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:24.217 [2024-11-26 20:39:19.118005] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:24.217 [2024-11-26 20:39:19.203000] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:24.475 20:39:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:24.475 20:39:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:11:24.475 20:39:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:24.475 20:39:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:24.475 20:39:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:24.475 20:39:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:24.475 20:39:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:11:24.475 20:39:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:11:24.475 20:39:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.475 20:39:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:24.475 [2024-11-26 20:39:19.347422] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:24.475 20:39:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.475 20:39:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:24.475 20:39:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.475 20:39:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:24.475 20:39:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.475 20:39:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:11:24.475 20:39:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.475 20:39:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@10 -- # set +x 00:11:24.475 [2024-11-26 20:39:19.363581] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:11:24.475 20:39:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.475 20:39:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:11:24.475 20:39:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.475 20:39:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:24.475 20:39:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.475 20:39:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:11:24.475 20:39:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.475 20:39:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:24.475 malloc0 00:11:24.475 20:39:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.475 20:39:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:11:24.475 20:39:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.475 20:39:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:24.475 20:39:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.475 20:39:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:11:24.475 20:39:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:11:24.475 20:39:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:11:24.476 20:39:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:11:24.476 20:39:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:11:24.476 20:39:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:11:24.476 { 00:11:24.476 "params": { 00:11:24.476 "name": "Nvme$subsystem", 00:11:24.476 "trtype": "$TEST_TRANSPORT", 00:11:24.476 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:24.476 "adrfam": "ipv4", 00:11:24.476 "trsvcid": "$NVMF_PORT", 00:11:24.476 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:24.476 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:24.476 "hdgst": ${hdgst:-false}, 00:11:24.476 "ddgst": ${ddgst:-false} 00:11:24.476 }, 00:11:24.476 "method": "bdev_nvme_attach_controller" 00:11:24.476 } 00:11:24.476 EOF 00:11:24.476 )") 00:11:24.476 20:39:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:11:24.476 20:39:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
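zcopy.sh then provisions the target over RPC: a TCP transport with zero-copy enabled, one subsystem backed by a malloc bdev, and listeners on the first target address. The same sequence as the rpc_cmd calls above, written out as direct rpc.py invocations with the flags copied from the log:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -c 0 --zcopy
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420
  $rpc bdev_malloc_create 32 4096 -b malloc0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1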
00:11:24.476 20:39:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:11:24.476 20:39:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:11:24.476 "params": { 00:11:24.476 "name": "Nvme1", 00:11:24.476 "trtype": "tcp", 00:11:24.476 "traddr": "10.0.0.3", 00:11:24.476 "adrfam": "ipv4", 00:11:24.476 "trsvcid": "4420", 00:11:24.476 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:24.476 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:24.476 "hdgst": false, 00:11:24.476 "ddgst": false 00:11:24.476 }, 00:11:24.476 "method": "bdev_nvme_attach_controller" 00:11:24.476 }' 00:11:24.735 [2024-11-26 20:39:19.467144] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:11:24.735 [2024-11-26 20:39:19.467269] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65722 ] 00:11:24.735 [2024-11-26 20:39:19.626136] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:24.735 [2024-11-26 20:39:19.720578] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:24.995 [2024-11-26 20:39:19.787428] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:24.995 Running I/O for 10 seconds... 00:11:27.311 7319.00 IOPS, 57.18 MiB/s [2024-11-26T20:39:23.241Z] 7387.00 IOPS, 57.71 MiB/s [2024-11-26T20:39:24.178Z] 7420.00 IOPS, 57.97 MiB/s [2024-11-26T20:39:25.145Z] 7444.50 IOPS, 58.16 MiB/s [2024-11-26T20:39:26.081Z] 7402.40 IOPS, 57.83 MiB/s [2024-11-26T20:39:27.019Z] 7401.17 IOPS, 57.82 MiB/s [2024-11-26T20:39:27.955Z] 7381.29 IOPS, 57.67 MiB/s [2024-11-26T20:39:29.331Z] 7399.88 IOPS, 57.81 MiB/s [2024-11-26T20:39:30.270Z] 7407.33 IOPS, 57.87 MiB/s [2024-11-26T20:39:30.270Z] 7381.30 IOPS, 57.67 MiB/s 00:11:35.277 Latency(us) 00:11:35.277 [2024-11-26T20:39:30.270Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:35.277 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:11:35.277 Verification LBA range: start 0x0 length 0x1000 00:11:35.277 Nvme1n1 : 10.01 7383.00 57.68 0.00 0.00 17286.95 2761.87 24966.10 00:11:35.277 [2024-11-26T20:39:30.270Z] =================================================================================================================== 00:11:35.277 [2024-11-26T20:39:30.270Z] Total : 7383.00 57.68 0.00 0.00 17286.95 2761.87 24966.10 00:11:35.277 20:39:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=65846 00:11:35.277 20:39:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:11:35.277 20:39:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:11:35.277 20:39:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:11:35.277 20:39:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:35.277 20:39:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:11:35.277 20:39:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:11:35.277 20:39:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:11:35.277 20:39:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy 
-- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:11:35.277 { 00:11:35.277 "params": { 00:11:35.277 "name": "Nvme$subsystem", 00:11:35.277 "trtype": "$TEST_TRANSPORT", 00:11:35.277 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:35.277 "adrfam": "ipv4", 00:11:35.277 "trsvcid": "$NVMF_PORT", 00:11:35.277 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:35.277 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:35.277 "hdgst": ${hdgst:-false}, 00:11:35.277 "ddgst": ${ddgst:-false} 00:11:35.277 }, 00:11:35.277 "method": "bdev_nvme_attach_controller" 00:11:35.277 } 00:11:35.277 EOF 00:11:35.277 )") 00:11:35.277 20:39:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:11:35.277 [2024-11-26 20:39:30.158179] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.277 [2024-11-26 20:39:30.158256] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.277 20:39:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:11:35.277 20:39:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:11:35.277 20:39:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:11:35.277 "params": { 00:11:35.277 "name": "Nvme1", 00:11:35.277 "trtype": "tcp", 00:11:35.277 "traddr": "10.0.0.3", 00:11:35.277 "adrfam": "ipv4", 00:11:35.277 "trsvcid": "4420", 00:11:35.277 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:35.277 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:35.277 "hdgst": false, 00:11:35.277 "ddgst": false 00:11:35.277 }, 00:11:35.277 "method": "bdev_nvme_attach_controller" 00:11:35.277 }' 00:11:35.277 [2024-11-26 20:39:30.170137] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.277 [2024-11-26 20:39:30.170185] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.277 [2024-11-26 20:39:30.182124] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.277 [2024-11-26 20:39:30.182162] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.277 [2024-11-26 20:39:30.194117] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.277 [2024-11-26 20:39:30.194150] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.277 [2024-11-26 20:39:30.210130] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.277 [2024-11-26 20:39:30.210172] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.277 [2024-11-26 20:39:30.215778] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
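Both bdevperf runs read their bdev configuration from a file descriptor produced by gen_nvmf_target_json and passed via --json /dev/fd/6x. Only the inner bdev_nvme_attach_controller entry is printed verbatim in the trace above; the outer wrapper shown here is an assumption about the usual SPDK JSON-config layout:

  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          {
            "params": {
              "name": "Nvme1",
              "trtype": "tcp",
              "traddr": "10.0.0.3",
              "adrfam": "ipv4",
              "trsvcid": "4420",
              "subnqn": "nqn.2016-06.io.spdk:cnode1",
              "hostnqn": "nqn.2016-06.io.spdk:host1",
              "hdgst": false,
              "ddgst": false
            },
            "method": "bdev_nvme_attach_controller"
          }
        ]
      }
    ]
  }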
00:11:35.277 [2024-11-26 20:39:30.215896] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65846 ] 00:11:35.277 [2024-11-26 20:39:30.222132] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.277 [2024-11-26 20:39:30.222184] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.277 [2024-11-26 20:39:30.234137] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.278 [2024-11-26 20:39:30.234177] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.278 [2024-11-26 20:39:30.246152] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.278 [2024-11-26 20:39:30.246190] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.278 [2024-11-26 20:39:30.258136] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.278 [2024-11-26 20:39:30.258175] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.537 [2024-11-26 20:39:30.270152] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.537 [2024-11-26 20:39:30.270197] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.537 [2024-11-26 20:39:30.282164] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.537 [2024-11-26 20:39:30.282197] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.537 [2024-11-26 20:39:30.294165] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.537 [2024-11-26 20:39:30.294197] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.537 [2024-11-26 20:39:30.306143] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.537 [2024-11-26 20:39:30.306184] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.537 [2024-11-26 20:39:30.318174] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.537 [2024-11-26 20:39:30.318227] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.537 [2024-11-26 20:39:30.330180] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.537 [2024-11-26 20:39:30.330211] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.537 [2024-11-26 20:39:30.342165] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.537 [2024-11-26 20:39:30.342211] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.537 [2024-11-26 20:39:30.354165] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.537 [2024-11-26 20:39:30.354201] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.537 [2024-11-26 20:39:30.366188] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.537 [2024-11-26 20:39:30.366218] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.537 [2024-11-26 20:39:30.374925] app.c: 919:spdk_app_start: *NOTICE*: 
Total cores available: 1 00:11:35.537 [2024-11-26 20:39:30.378193] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.537 [2024-11-26 20:39:30.378227] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.537 [2024-11-26 20:39:30.390200] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.537 [2024-11-26 20:39:30.390243] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.537 [2024-11-26 20:39:30.402203] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.537 [2024-11-26 20:39:30.402241] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.537 [2024-11-26 20:39:30.414188] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.537 [2024-11-26 20:39:30.414223] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.537 [2024-11-26 20:39:30.426201] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.537 [2024-11-26 20:39:30.426235] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.537 [2024-11-26 20:39:30.442207] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.537 [2024-11-26 20:39:30.442243] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.537 [2024-11-26 20:39:30.454205] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.537 [2024-11-26 20:39:30.454236] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.537 [2024-11-26 20:39:30.462249] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:35.537 [2024-11-26 20:39:30.466205] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.537 [2024-11-26 20:39:30.466231] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.537 [2024-11-26 20:39:30.478239] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.537 [2024-11-26 20:39:30.478291] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.537 [2024-11-26 20:39:30.490246] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.537 [2024-11-26 20:39:30.490299] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.537 [2024-11-26 20:39:30.506266] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.537 [2024-11-26 20:39:30.506322] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.537 [2024-11-26 20:39:30.518237] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.537 [2024-11-26 20:39:30.518276] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.537 [2024-11-26 20:39:30.523350] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:35.796 [2024-11-26 20:39:30.530245] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.796 [2024-11-26 20:39:30.530291] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.796 [2024-11-26 20:39:30.542244] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:11:35.796 [2024-11-26 20:39:30.542283] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.796 [2024-11-26 20:39:30.554228] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.796 [2024-11-26 20:39:30.554258] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.796 [2024-11-26 20:39:30.566254] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.796 [2024-11-26 20:39:30.566291] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.796 [2024-11-26 20:39:30.578253] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.796 [2024-11-26 20:39:30.578287] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.796 [2024-11-26 20:39:30.594256] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.796 [2024-11-26 20:39:30.594292] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.796 [2024-11-26 20:39:30.606284] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.796 [2024-11-26 20:39:30.606328] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.796 [2024-11-26 20:39:30.618327] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.796 [2024-11-26 20:39:30.618367] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.796 [2024-11-26 20:39:30.630311] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.796 [2024-11-26 20:39:30.630348] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.796 [2024-11-26 20:39:30.642299] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.796 [2024-11-26 20:39:30.642330] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.796 Running I/O for 5 seconds... 
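At this point bdevperf starts its 5-second run while the harness keeps hitting the namespace-add error path (the repeated "Requested NSID 1 already in use" / "Unable to add namespace" pairs below, apparently from retried nvmf_subsystem_add_ns calls). The periodic throughput lines that follow, e.g. "12910.00 IOPS, 100.86 MiB/s", are consistent with an 8 KiB I/O size; that size is inferred from the numbers and is not printed explicitly in this log. A quick check of the conversion:

# Sanity check, assuming an 8 KiB (8192-byte) I/O size inferred from the
# reported numbers:  MiB/s = IOPS * io_size_bytes / 2^20
awk 'BEGIN { printf "%.2f MiB/s\n", 12910 * 8192 / (1024 * 1024) }'
# prints 100.86 MiB/s, matching the first periodic report below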
00:11:35.796 [2024-11-26 20:39:30.658216] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.796 [2024-11-26 20:39:30.658274] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.796 [2024-11-26 20:39:30.674387] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.796 [2024-11-26 20:39:30.674428] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.796 [2024-11-26 20:39:30.691751] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.796 [2024-11-26 20:39:30.691792] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.796 [2024-11-26 20:39:30.707216] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.796 [2024-11-26 20:39:30.707274] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.796 [2024-11-26 20:39:30.724892] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.796 [2024-11-26 20:39:30.724942] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.796 [2024-11-26 20:39:30.739109] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.796 [2024-11-26 20:39:30.739151] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.796 [2024-11-26 20:39:30.755883] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.796 [2024-11-26 20:39:30.755924] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.796 [2024-11-26 20:39:30.772851] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.796 [2024-11-26 20:39:30.772895] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.055 [2024-11-26 20:39:30.789200] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.055 [2024-11-26 20:39:30.789250] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.055 [2024-11-26 20:39:30.806383] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.055 [2024-11-26 20:39:30.806436] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.055 [2024-11-26 20:39:30.822234] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.055 [2024-11-26 20:39:30.822277] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.055 [2024-11-26 20:39:30.839865] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.055 [2024-11-26 20:39:30.839917] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.055 [2024-11-26 20:39:30.854406] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.055 [2024-11-26 20:39:30.854446] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.055 [2024-11-26 20:39:30.870713] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.055 [2024-11-26 20:39:30.870756] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.055 [2024-11-26 20:39:30.890585] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.055 
[2024-11-26 20:39:30.890639] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.055 [2024-11-26 20:39:30.907403] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.055 [2024-11-26 20:39:30.907452] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.055 [2024-11-26 20:39:30.924163] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.055 [2024-11-26 20:39:30.924217] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.055 [2024-11-26 20:39:30.940699] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.055 [2024-11-26 20:39:30.940741] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.055 [2024-11-26 20:39:30.957694] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.055 [2024-11-26 20:39:30.957748] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.055 [2024-11-26 20:39:30.973951] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.055 [2024-11-26 20:39:30.973989] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.055 [2024-11-26 20:39:30.991518] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.055 [2024-11-26 20:39:30.991560] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.055 [2024-11-26 20:39:31.007208] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.055 [2024-11-26 20:39:31.007246] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.055 [2024-11-26 20:39:31.023148] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.055 [2024-11-26 20:39:31.023200] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.055 [2024-11-26 20:39:31.039590] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.055 [2024-11-26 20:39:31.039644] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.314 [2024-11-26 20:39:31.060851] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.314 [2024-11-26 20:39:31.060902] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.314 [2024-11-26 20:39:31.081253] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.314 [2024-11-26 20:39:31.081291] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.314 [2024-11-26 20:39:31.098610] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.314 [2024-11-26 20:39:31.098666] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.314 [2024-11-26 20:39:31.114641] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.314 [2024-11-26 20:39:31.114706] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.314 [2024-11-26 20:39:31.132135] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.314 [2024-11-26 20:39:31.132206] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.314 [2024-11-26 20:39:31.148891] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.314 [2024-11-26 20:39:31.148936] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.314 [2024-11-26 20:39:31.164040] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.314 [2024-11-26 20:39:31.164082] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.314 [2024-11-26 20:39:31.179871] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.314 [2024-11-26 20:39:31.179915] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.314 [2024-11-26 20:39:31.198001] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.314 [2024-11-26 20:39:31.198051] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.314 [2024-11-26 20:39:31.214311] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.314 [2024-11-26 20:39:31.214372] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.314 [2024-11-26 20:39:31.231069] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.314 [2024-11-26 20:39:31.231122] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.314 [2024-11-26 20:39:31.246373] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.314 [2024-11-26 20:39:31.246425] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.314 [2024-11-26 20:39:31.263357] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.314 [2024-11-26 20:39:31.263395] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.314 [2024-11-26 20:39:31.278350] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.314 [2024-11-26 20:39:31.278391] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.314 [2024-11-26 20:39:31.294326] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.314 [2024-11-26 20:39:31.294376] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.602 [2024-11-26 20:39:31.310049] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.602 [2024-11-26 20:39:31.310091] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.602 [2024-11-26 20:39:31.327824] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.602 [2024-11-26 20:39:31.327866] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.602 [2024-11-26 20:39:31.342929] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.602 [2024-11-26 20:39:31.342971] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.602 [2024-11-26 20:39:31.353826] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.602 [2024-11-26 20:39:31.353860] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.602 [2024-11-26 20:39:31.369453] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.602 [2024-11-26 20:39:31.369507] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.602 [2024-11-26 20:39:31.386598] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.602 [2024-11-26 20:39:31.386651] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.602 [2024-11-26 20:39:31.402811] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.602 [2024-11-26 20:39:31.402854] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.602 [2024-11-26 20:39:31.420869] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.602 [2024-11-26 20:39:31.420913] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.602 [2024-11-26 20:39:31.435984] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.602 [2024-11-26 20:39:31.436030] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.602 [2024-11-26 20:39:31.447395] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.602 [2024-11-26 20:39:31.447436] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.602 [2024-11-26 20:39:31.463389] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.602 [2024-11-26 20:39:31.463430] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.602 [2024-11-26 20:39:31.481411] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.602 [2024-11-26 20:39:31.481450] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.602 [2024-11-26 20:39:31.500184] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.602 [2024-11-26 20:39:31.500233] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.602 [2024-11-26 20:39:31.511725] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.602 [2024-11-26 20:39:31.511771] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.602 [2024-11-26 20:39:31.530471] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.602 [2024-11-26 20:39:31.530513] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.602 [2024-11-26 20:39:31.544358] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.602 [2024-11-26 20:39:31.544413] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.602 [2024-11-26 20:39:31.561180] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.602 [2024-11-26 20:39:31.561238] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.887 [2024-11-26 20:39:31.577084] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.887 [2024-11-26 20:39:31.577127] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.887 [2024-11-26 20:39:31.594767] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.887 [2024-11-26 20:39:31.594815] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.887 [2024-11-26 20:39:31.614551] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.887 [2024-11-26 20:39:31.614599] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.887 [2024-11-26 20:39:31.631731] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.887 [2024-11-26 20:39:31.631772] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.887 12910.00 IOPS, 100.86 MiB/s [2024-11-26T20:39:31.880Z] [2024-11-26 20:39:31.648416] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.887 [2024-11-26 20:39:31.648455] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.887 [2024-11-26 20:39:31.660614] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.887 [2024-11-26 20:39:31.660657] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.887 [2024-11-26 20:39:31.676974] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.887 [2024-11-26 20:39:31.677013] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.887 [2024-11-26 20:39:31.692258] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.887 [2024-11-26 20:39:31.692298] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.887 [2024-11-26 20:39:31.709510] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.887 [2024-11-26 20:39:31.709547] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.887 [2024-11-26 20:39:31.725626] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.887 [2024-11-26 20:39:31.725668] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.887 [2024-11-26 20:39:31.742150] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.887 [2024-11-26 20:39:31.742201] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.887 [2024-11-26 20:39:31.758809] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.887 [2024-11-26 20:39:31.758847] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.887 [2024-11-26 20:39:31.776407] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.887 [2024-11-26 20:39:31.776448] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.887 [2024-11-26 20:39:31.792980] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.887 [2024-11-26 20:39:31.793023] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.887 [2024-11-26 20:39:31.810308] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.887 [2024-11-26 20:39:31.810348] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.887 [2024-11-26 20:39:31.825434] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.887 [2024-11-26 20:39:31.825473] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.887 [2024-11-26 20:39:31.842291] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:11:36.887 [2024-11-26 20:39:31.842334] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.887 [2024-11-26 20:39:31.858710] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.888 [2024-11-26 20:39:31.858761] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.888 [2024-11-26 20:39:31.875587] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.888 [2024-11-26 20:39:31.875633] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.146 [2024-11-26 20:39:31.892237] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.146 [2024-11-26 20:39:31.892279] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.146 [2024-11-26 20:39:31.912938] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.146 [2024-11-26 20:39:31.912986] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.146 [2024-11-26 20:39:31.930210] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.146 [2024-11-26 20:39:31.930248] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.146 [2024-11-26 20:39:31.946744] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.146 [2024-11-26 20:39:31.946801] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.146 [2024-11-26 20:39:31.963866] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.146 [2024-11-26 20:39:31.963909] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.146 [2024-11-26 20:39:31.982335] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.146 [2024-11-26 20:39:31.982370] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.146 [2024-11-26 20:39:31.997961] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.146 [2024-11-26 20:39:31.998002] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.146 [2024-11-26 20:39:32.016137] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.146 [2024-11-26 20:39:32.016190] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.146 [2024-11-26 20:39:32.030094] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.146 [2024-11-26 20:39:32.030143] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.146 [2024-11-26 20:39:32.045827] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.146 [2024-11-26 20:39:32.045875] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.146 [2024-11-26 20:39:32.063952] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.146 [2024-11-26 20:39:32.063998] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.147 [2024-11-26 20:39:32.079271] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.147 [2024-11-26 20:39:32.079324] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.147 [2024-11-26 20:39:32.090696] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.147 [2024-11-26 20:39:32.090739] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.147 [2024-11-26 20:39:32.106859] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.147 [2024-11-26 20:39:32.106901] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.147 [2024-11-26 20:39:32.122858] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.147 [2024-11-26 20:39:32.122910] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.407 [2024-11-26 20:39:32.140174] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.407 [2024-11-26 20:39:32.140226] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.407 [2024-11-26 20:39:32.156064] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.407 [2024-11-26 20:39:32.156121] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.407 [2024-11-26 20:39:32.174255] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.407 [2024-11-26 20:39:32.174305] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.407 [2024-11-26 20:39:32.188495] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.407 [2024-11-26 20:39:32.188541] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.407 [2024-11-26 20:39:32.203997] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.407 [2024-11-26 20:39:32.204041] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.407 [2024-11-26 20:39:32.222575] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.407 [2024-11-26 20:39:32.222619] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.407 [2024-11-26 20:39:32.236710] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.407 [2024-11-26 20:39:32.236754] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.407 [2024-11-26 20:39:32.251723] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.407 [2024-11-26 20:39:32.251763] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.407 [2024-11-26 20:39:32.263345] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.407 [2024-11-26 20:39:32.263383] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.407 [2024-11-26 20:39:32.279855] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.407 [2024-11-26 20:39:32.279895] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.407 [2024-11-26 20:39:32.296627] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.407 [2024-11-26 20:39:32.296666] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.407 [2024-11-26 20:39:32.312859] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.407 [2024-11-26 20:39:32.312920] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.407 [2024-11-26 20:39:32.333237] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.407 [2024-11-26 20:39:32.333291] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.407 [2024-11-26 20:39:32.350206] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.407 [2024-11-26 20:39:32.350252] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.407 [2024-11-26 20:39:32.366403] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.407 [2024-11-26 20:39:32.366453] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.407 [2024-11-26 20:39:32.384616] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.407 [2024-11-26 20:39:32.384677] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.666 [2024-11-26 20:39:32.398720] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.666 [2024-11-26 20:39:32.398768] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.666 [2024-11-26 20:39:32.413951] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.666 [2024-11-26 20:39:32.414005] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.666 [2024-11-26 20:39:32.422899] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.666 [2024-11-26 20:39:32.422944] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.666 [2024-11-26 20:39:32.439796] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.666 [2024-11-26 20:39:32.439860] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.666 [2024-11-26 20:39:32.457890] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.666 [2024-11-26 20:39:32.457938] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.666 [2024-11-26 20:39:32.473520] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.666 [2024-11-26 20:39:32.473564] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.666 [2024-11-26 20:39:32.492765] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.666 [2024-11-26 20:39:32.492817] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.666 [2024-11-26 20:39:32.511883] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.666 [2024-11-26 20:39:32.511929] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.666 [2024-11-26 20:39:32.529460] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.666 [2024-11-26 20:39:32.529505] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.666 [2024-11-26 20:39:32.546575] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.666 [2024-11-26 20:39:32.546627] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.666 [2024-11-26 20:39:32.568276] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.666 [2024-11-26 20:39:32.568337] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.666 [2024-11-26 20:39:32.583734] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.666 [2024-11-26 20:39:32.583781] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.666 [2024-11-26 20:39:32.592794] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.666 [2024-11-26 20:39:32.592834] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.667 [2024-11-26 20:39:32.608278] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.667 [2024-11-26 20:39:32.608318] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.667 [2024-11-26 20:39:32.623450] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.667 [2024-11-26 20:39:32.623494] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.667 [2024-11-26 20:39:32.641118] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.667 [2024-11-26 20:39:32.641180] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.667 12792.50 IOPS, 99.94 MiB/s [2024-11-26T20:39:32.660Z] [2024-11-26 20:39:32.657056] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.667 [2024-11-26 20:39:32.657100] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.925 [2024-11-26 20:39:32.675380] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.925 [2024-11-26 20:39:32.675426] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.925 [2024-11-26 20:39:32.689008] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.925 [2024-11-26 20:39:32.689052] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.925 [2024-11-26 20:39:32.704826] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.925 [2024-11-26 20:39:32.704875] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.925 [2024-11-26 20:39:32.722939] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.925 [2024-11-26 20:39:32.722982] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.925 [2024-11-26 20:39:32.738170] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.925 [2024-11-26 20:39:32.738219] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.925 [2024-11-26 20:39:32.749523] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.925 [2024-11-26 20:39:32.749563] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.925 [2024-11-26 20:39:32.766114] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.925 [2024-11-26 20:39:32.766186] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.925 [2024-11-26 20:39:32.783331] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:11:37.925 [2024-11-26 20:39:32.783372] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.925 [2024-11-26 20:39:32.799335] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.925 [2024-11-26 20:39:32.799394] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.925 [2024-11-26 20:39:32.816101] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.926 [2024-11-26 20:39:32.816143] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.926 [2024-11-26 20:39:32.825816] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.926 [2024-11-26 20:39:32.825857] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.926 [2024-11-26 20:39:32.836106] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.926 [2024-11-26 20:39:32.836165] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.926 [2024-11-26 20:39:32.853191] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.926 [2024-11-26 20:39:32.853230] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.926 [2024-11-26 20:39:32.871322] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.926 [2024-11-26 20:39:32.871363] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.926 [2024-11-26 20:39:32.885226] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.926 [2024-11-26 20:39:32.885278] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.926 [2024-11-26 20:39:32.901694] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.926 [2024-11-26 20:39:32.901735] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.185 [2024-11-26 20:39:32.918479] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.185 [2024-11-26 20:39:32.918521] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.185 [2024-11-26 20:39:32.936310] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.185 [2024-11-26 20:39:32.936364] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.185 [2024-11-26 20:39:32.950235] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.185 [2024-11-26 20:39:32.950272] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.185 [2024-11-26 20:39:32.966858] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.185 [2024-11-26 20:39:32.966901] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.185 [2024-11-26 20:39:32.982002] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.185 [2024-11-26 20:39:32.982055] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.185 [2024-11-26 20:39:33.000043] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.185 [2024-11-26 20:39:33.000091] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.185 [2024-11-26 20:39:33.015493] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.185 [2024-11-26 20:39:33.015534] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.185 [2024-11-26 20:39:33.024855] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.185 [2024-11-26 20:39:33.024895] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.185 [2024-11-26 20:39:33.041383] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.185 [2024-11-26 20:39:33.041430] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.185 [2024-11-26 20:39:33.057963] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.185 [2024-11-26 20:39:33.058004] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.185 [2024-11-26 20:39:33.074519] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.185 [2024-11-26 20:39:33.074560] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.185 [2024-11-26 20:39:33.091895] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.185 [2024-11-26 20:39:33.091936] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.185 [2024-11-26 20:39:33.108854] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.185 [2024-11-26 20:39:33.108896] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.185 [2024-11-26 20:39:33.124334] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.185 [2024-11-26 20:39:33.124386] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.185 [2024-11-26 20:39:33.143515] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.185 [2024-11-26 20:39:33.143557] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.185 [2024-11-26 20:39:33.157711] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.185 [2024-11-26 20:39:33.157753] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.185 [2024-11-26 20:39:33.173742] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.185 [2024-11-26 20:39:33.173783] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.444 [2024-11-26 20:39:33.191922] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.444 [2024-11-26 20:39:33.191967] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.444 [2024-11-26 20:39:33.206566] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.444 [2024-11-26 20:39:33.206609] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.444 [2024-11-26 20:39:33.217985] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.444 [2024-11-26 20:39:33.218028] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.444 [2024-11-26 20:39:33.234715] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.444 [2024-11-26 20:39:33.234771] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.444 [2024-11-26 20:39:33.255037] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.444 [2024-11-26 20:39:33.255090] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.444 [2024-11-26 20:39:33.271254] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.444 [2024-11-26 20:39:33.271301] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.444 [2024-11-26 20:39:33.287131] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.444 [2024-11-26 20:39:33.287186] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.444 [2024-11-26 20:39:33.304858] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.444 [2024-11-26 20:39:33.304904] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.444 [2024-11-26 20:39:33.320724] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.444 [2024-11-26 20:39:33.320769] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.444 [2024-11-26 20:39:33.331952] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.444 [2024-11-26 20:39:33.331995] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.444 [2024-11-26 20:39:33.352548] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.444 [2024-11-26 20:39:33.352600] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.444 [2024-11-26 20:39:33.368837] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.444 [2024-11-26 20:39:33.368893] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.444 [2024-11-26 20:39:33.387710] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.444 [2024-11-26 20:39:33.387757] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.444 [2024-11-26 20:39:33.402326] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.444 [2024-11-26 20:39:33.402368] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.444 [2024-11-26 20:39:33.419482] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.444 [2024-11-26 20:39:33.419527] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.444 [2024-11-26 20:39:33.435226] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.444 [2024-11-26 20:39:33.435271] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.702 [2024-11-26 20:39:33.446641] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.702 [2024-11-26 20:39:33.446684] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.702 [2024-11-26 20:39:33.462474] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.702 [2024-11-26 20:39:33.462514] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.702 [2024-11-26 20:39:33.483394] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.702 [2024-11-26 20:39:33.483445] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.702 [2024-11-26 20:39:33.512866] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.702 [2024-11-26 20:39:33.512927] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.702 [2024-11-26 20:39:33.546865] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.702 [2024-11-26 20:39:33.546933] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.702 [2024-11-26 20:39:33.580909] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.702 [2024-11-26 20:39:33.580994] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.702 [2024-11-26 20:39:33.613147] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.702 [2024-11-26 20:39:33.613246] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.702 [2024-11-26 20:39:33.629550] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.702 [2024-11-26 20:39:33.629598] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.702 12291.00 IOPS, 96.02 MiB/s [2024-11-26T20:39:33.695Z] [2024-11-26 20:39:33.647980] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.702 [2024-11-26 20:39:33.648028] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.703 [2024-11-26 20:39:33.662368] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.703 [2024-11-26 20:39:33.662412] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.703 [2024-11-26 20:39:33.678826] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.703 [2024-11-26 20:39:33.678871] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.962 [2024-11-26 20:39:33.695042] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.962 [2024-11-26 20:39:33.695087] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.962 [2024-11-26 20:39:33.711265] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.962 [2024-11-26 20:39:33.711317] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.962 [2024-11-26 20:39:33.729695] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.962 [2024-11-26 20:39:33.729739] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.962 [2024-11-26 20:39:33.743680] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.962 [2024-11-26 20:39:33.743728] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.962 [2024-11-26 20:39:33.758920] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.962 [2024-11-26 20:39:33.758965] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.962 [2024-11-26 20:39:33.777251] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:11:38.962 [2024-11-26 20:39:33.777295] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.962 [2024-11-26 20:39:33.792037] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.962 [2024-11-26 20:39:33.792080] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.962 [2024-11-26 20:39:33.804144] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.962 [2024-11-26 20:39:33.804199] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.962 [2024-11-26 20:39:33.820381] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.962 [2024-11-26 20:39:33.820425] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.962 [2024-11-26 20:39:33.836068] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.962 [2024-11-26 20:39:33.836114] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.962 [2024-11-26 20:39:33.847437] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.962 [2024-11-26 20:39:33.847481] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.962 [2024-11-26 20:39:33.864001] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.962 [2024-11-26 20:39:33.864047] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.962 [2024-11-26 20:39:33.879967] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.962 [2024-11-26 20:39:33.880014] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.962 [2024-11-26 20:39:33.891268] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.962 [2024-11-26 20:39:33.891317] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.962 [2024-11-26 20:39:33.908037] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.962 [2024-11-26 20:39:33.908088] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.962 [2024-11-26 20:39:33.925093] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.962 [2024-11-26 20:39:33.925143] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.962 [2024-11-26 20:39:33.945387] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.962 [2024-11-26 20:39:33.945436] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.221 [2024-11-26 20:39:33.963083] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.221 [2024-11-26 20:39:33.963130] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.221 [2024-11-26 20:39:33.979771] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.221 [2024-11-26 20:39:33.979817] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.221 [2024-11-26 20:39:33.995928] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.221 [2024-11-26 20:39:33.995973] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.221 [2024-11-26 20:39:34.014160] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.221 [2024-11-26 20:39:34.014218] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.221 [2024-11-26 20:39:34.028696] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.221 [2024-11-26 20:39:34.028747] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.221 [2024-11-26 20:39:34.045799] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.221 [2024-11-26 20:39:34.045842] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.221 [2024-11-26 20:39:34.061723] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.221 [2024-11-26 20:39:34.061766] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.221 [2024-11-26 20:39:34.079256] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.221 [2024-11-26 20:39:34.079297] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.221 [2024-11-26 20:39:34.094528] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.221 [2024-11-26 20:39:34.094573] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.221 [2024-11-26 20:39:34.105968] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.221 [2024-11-26 20:39:34.106005] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.221 [2024-11-26 20:39:34.122708] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.221 [2024-11-26 20:39:34.122758] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.221 [2024-11-26 20:39:34.138483] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.221 [2024-11-26 20:39:34.138526] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.221 [2024-11-26 20:39:34.156283] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.221 [2024-11-26 20:39:34.156324] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.221 [2024-11-26 20:39:34.174014] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.221 [2024-11-26 20:39:34.174059] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.221 [2024-11-26 20:39:34.189659] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.221 [2024-11-26 20:39:34.189706] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.221 [2024-11-26 20:39:34.207815] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.221 [2024-11-26 20:39:34.207862] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.480 [2024-11-26 20:39:34.222404] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.480 [2024-11-26 20:39:34.222451] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.480 [2024-11-26 20:39:34.237557] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.480 [2024-11-26 20:39:34.237602] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.480 [2024-11-26 20:39:34.249265] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.480 [2024-11-26 20:39:34.249307] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.480 [2024-11-26 20:39:34.265454] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.480 [2024-11-26 20:39:34.265514] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.480 [2024-11-26 20:39:34.286268] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.480 [2024-11-26 20:39:34.286318] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.480 [2024-11-26 20:39:34.302569] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.480 [2024-11-26 20:39:34.302619] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.480 [2024-11-26 20:39:34.323413] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.480 [2024-11-26 20:39:34.323465] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.480 [2024-11-26 20:39:34.339191] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.480 [2024-11-26 20:39:34.339247] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.480 [2024-11-26 20:39:34.348145] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.480 [2024-11-26 20:39:34.348209] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.480 [2024-11-26 20:39:34.364480] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.480 [2024-11-26 20:39:34.364534] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.480 [2024-11-26 20:39:34.373577] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.480 [2024-11-26 20:39:34.373626] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.480 [2024-11-26 20:39:34.389595] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.480 [2024-11-26 20:39:34.389648] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.480 [2024-11-26 20:39:34.400681] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.480 [2024-11-26 20:39:34.400730] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.480 [2024-11-26 20:39:34.417132] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.480 [2024-11-26 20:39:34.417197] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.480 [2024-11-26 20:39:34.438450] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.480 [2024-11-26 20:39:34.438502] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.480 [2024-11-26 20:39:34.453579] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.480 [2024-11-26 20:39:34.453625] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.480 [2024-11-26 20:39:34.471000] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.480 [2024-11-26 20:39:34.471049] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.740 [2024-11-26 20:39:34.485695] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.740 [2024-11-26 20:39:34.485745] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.740 [2024-11-26 20:39:34.497235] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.740 [2024-11-26 20:39:34.497284] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.740 [2024-11-26 20:39:34.516764] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.740 [2024-11-26 20:39:34.516813] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.740 [2024-11-26 20:39:34.533510] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.740 [2024-11-26 20:39:34.533568] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.740 [2024-11-26 20:39:34.550590] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.740 [2024-11-26 20:39:34.550637] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.740 [2024-11-26 20:39:34.567839] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.740 [2024-11-26 20:39:34.567891] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.740 [2024-11-26 20:39:34.583890] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.740 [2024-11-26 20:39:34.583942] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.740 [2024-11-26 20:39:34.601579] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.740 [2024-11-26 20:39:34.601636] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.740 [2024-11-26 20:39:34.616758] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.740 [2024-11-26 20:39:34.616832] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.740 [2024-11-26 20:39:34.632910] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.740 [2024-11-26 20:39:34.632963] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.740 12369.50 IOPS, 96.64 MiB/s [2024-11-26T20:39:34.733Z] [2024-11-26 20:39:34.649034] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.740 [2024-11-26 20:39:34.649082] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.740 [2024-11-26 20:39:34.666464] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.740 [2024-11-26 20:39:34.666507] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.740 [2024-11-26 20:39:34.682829] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.740 [2024-11-26 20:39:34.682869] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.740 [2024-11-26 20:39:34.700137] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
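The repeated pair of messages running through this stretch of the log is what the zcopy test is exercising at this point: while bdevperf keeps I/O in flight, the harness keeps asking the target to attach a namespace under NSID 1, which is already occupied, so each attempt is rejected from the paused-subsystem callback (hence nvmf_rpc_ns_paused) and the run simply continues. A minimal way to reproduce the same rejection by hand is sketched below; it is illustrative only, not the literal loop in zcopy.sh, and it assumes a subsystem nqn.2016-06.io.spdk:cnode1 whose NSID 1 is already populated and an existing bdev named malloc0:

    # Each call fails with "Requested NSID 1 already in use"; rpc.py then
    # exits nonzero, hence the "|| true" to keep the loop going.
    for _ in $(seq 1 20); do
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns \
            nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
    done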
00:11:39.740 [2024-11-26 20:39:34.700198] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.740 [2024-11-26 20:39:34.715590] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.740 [2024-11-26 20:39:34.715642] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.741 [2024-11-26 20:39:34.727189] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.741 [2024-11-26 20:39:34.727242] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.000 [2024-11-26 20:39:34.743452] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.000 [2024-11-26 20:39:34.743490] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.000 [2024-11-26 20:39:34.760101] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.000 [2024-11-26 20:39:34.760142] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.000 [2024-11-26 20:39:34.776285] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.000 [2024-11-26 20:39:34.776363] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.000 [2024-11-26 20:39:34.792854] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.000 [2024-11-26 20:39:34.792907] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.000 [2024-11-26 20:39:34.813937] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.000 [2024-11-26 20:39:34.813987] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.001 [2024-11-26 20:39:34.831707] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.001 [2024-11-26 20:39:34.831758] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.001 [2024-11-26 20:39:34.847772] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.001 [2024-11-26 20:39:34.847814] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.001 [2024-11-26 20:39:34.866276] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.001 [2024-11-26 20:39:34.866327] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.001 [2024-11-26 20:39:34.881629] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.001 [2024-11-26 20:39:34.881677] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.001 [2024-11-26 20:39:34.890557] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.001 [2024-11-26 20:39:34.890595] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.001 [2024-11-26 20:39:34.907032] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.001 [2024-11-26 20:39:34.907079] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.001 [2024-11-26 20:39:34.924409] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.001 [2024-11-26 20:39:34.924451] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.001 [2024-11-26 20:39:34.939970] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.001 [2024-11-26 20:39:34.940013] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.001 [2024-11-26 20:39:34.951956] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.001 [2024-11-26 20:39:34.951997] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.001 [2024-11-26 20:39:34.967755] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.001 [2024-11-26 20:39:34.967796] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.001 [2024-11-26 20:39:34.983855] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.001 [2024-11-26 20:39:34.983897] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.260 [2024-11-26 20:39:35.000359] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.260 [2024-11-26 20:39:35.000403] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.260 [2024-11-26 20:39:35.018583] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.260 [2024-11-26 20:39:35.018623] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.260 [2024-11-26 20:39:35.034435] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.260 [2024-11-26 20:39:35.034479] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.260 [2024-11-26 20:39:35.052735] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.260 [2024-11-26 20:39:35.052781] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.260 [2024-11-26 20:39:35.067227] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.260 [2024-11-26 20:39:35.067281] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.260 [2024-11-26 20:39:35.076935] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.260 [2024-11-26 20:39:35.076986] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.260 [2024-11-26 20:39:35.092304] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.260 [2024-11-26 20:39:35.092346] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.260 [2024-11-26 20:39:35.101312] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.260 [2024-11-26 20:39:35.101349] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.261 [2024-11-26 20:39:35.117513] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.261 [2024-11-26 20:39:35.117548] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.261 [2024-11-26 20:39:35.129027] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.261 [2024-11-26 20:39:35.129061] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.261 [2024-11-26 20:39:35.145203] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.261 [2024-11-26 20:39:35.145267] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.261 [2024-11-26 20:39:35.161421] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.261 [2024-11-26 20:39:35.161485] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.261 [2024-11-26 20:39:35.179201] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.261 [2024-11-26 20:39:35.179259] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.261 [2024-11-26 20:39:35.194919] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.261 [2024-11-26 20:39:35.194978] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.261 [2024-11-26 20:39:35.212829] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.261 [2024-11-26 20:39:35.212871] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.261 [2024-11-26 20:39:35.228272] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.261 [2024-11-26 20:39:35.228313] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.261 [2024-11-26 20:39:35.239816] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.261 [2024-11-26 20:39:35.239857] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.520 [2024-11-26 20:39:35.255774] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.520 [2024-11-26 20:39:35.255815] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.520 [2024-11-26 20:39:35.273653] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.520 [2024-11-26 20:39:35.273688] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.520 [2024-11-26 20:39:35.294214] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.520 [2024-11-26 20:39:35.294255] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.520 [2024-11-26 20:39:35.310095] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.520 [2024-11-26 20:39:35.310387] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.520 [2024-11-26 20:39:35.326958] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.520 [2024-11-26 20:39:35.327019] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.520 [2024-11-26 20:39:35.343986] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.520 [2024-11-26 20:39:35.344033] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.520 [2024-11-26 20:39:35.360779] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.520 [2024-11-26 20:39:35.360818] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.520 [2024-11-26 20:39:35.381516] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.520 [2024-11-26 20:39:35.381702] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.520 [2024-11-26 20:39:35.398998] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.520 [2024-11-26 20:39:35.399049] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.520 [2024-11-26 20:39:35.415887] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.520 [2024-11-26 20:39:35.415933] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.520 [2024-11-26 20:39:35.431860] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.520 [2024-11-26 20:39:35.431905] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.520 [2024-11-26 20:39:35.449787] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.520 [2024-11-26 20:39:35.449837] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.520 [2024-11-26 20:39:35.465502] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.520 [2024-11-26 20:39:35.465542] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.520 [2024-11-26 20:39:35.483257] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.520 [2024-11-26 20:39:35.483295] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.520 [2024-11-26 20:39:35.500174] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.520 [2024-11-26 20:39:35.500212] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.795 [2024-11-26 20:39:35.515494] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.795 [2024-11-26 20:39:35.515653] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.795 [2024-11-26 20:39:35.535684] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.795 [2024-11-26 20:39:35.535727] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.795 [2024-11-26 20:39:35.550307] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.795 [2024-11-26 20:39:35.550346] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.795 [2024-11-26 20:39:35.567458] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.795 [2024-11-26 20:39:35.567497] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.795 [2024-11-26 20:39:35.583453] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.795 [2024-11-26 20:39:35.583491] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.795 [2024-11-26 20:39:35.600844] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.795 [2024-11-26 20:39:35.600879] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.795 [2024-11-26 20:39:35.618015] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.795 [2024-11-26 20:39:35.618249] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.795 [2024-11-26 20:39:35.635020] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.795 [2024-11-26 20:39:35.635064] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.795 12463.00 IOPS, 97.37 MiB/s [2024-11-26T20:39:35.788Z] [2024-11-26 20:39:35.650780] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.795 [2024-11-26 20:39:35.650820] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.795 00:11:40.795 Latency(us) 00:11:40.795 [2024-11-26T20:39:35.788Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:40.795 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:11:40.795 Nvme1n1 : 5.01 12464.22 97.38 0.00 0.00 10257.20 3885.35 53427.44 00:11:40.795 [2024-11-26T20:39:35.788Z] =================================================================================================================== 00:11:40.795 [2024-11-26T20:39:35.788Z] Total : 12464.22 97.38 0.00 0.00 10257.20 3885.35 53427.44 00:11:40.795 [2024-11-26 20:39:35.660461] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.795 [2024-11-26 20:39:35.660492] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.795 [2024-11-26 20:39:35.672491] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.795 [2024-11-26 20:39:35.672719] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.795 [2024-11-26 20:39:35.684513] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.795 [2024-11-26 20:39:35.684555] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.795 [2024-11-26 20:39:35.696505] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.795 [2024-11-26 20:39:35.696544] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.795 [2024-11-26 20:39:35.708495] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.795 [2024-11-26 20:39:35.708530] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.795 [2024-11-26 20:39:35.720500] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.795 [2024-11-26 20:39:35.720534] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.795 [2024-11-26 20:39:35.732506] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.795 [2024-11-26 20:39:35.732758] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.795 [2024-11-26 20:39:35.744522] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.795 [2024-11-26 20:39:35.744570] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.795 [2024-11-26 20:39:35.756513] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.795 [2024-11-26 20:39:35.756561] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.795 [2024-11-26 20:39:35.768515] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.795 [2024-11-26 20:39:35.768749] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.076 [2024-11-26 20:39:35.780534] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.076 [2024-11-26 
20:39:35.780571] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.076 [2024-11-26 20:39:35.792553] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.076 [2024-11-26 20:39:35.792586] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.076 [2024-11-26 20:39:35.804539] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.076 [2024-11-26 20:39:35.804775] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.076 [2024-11-26 20:39:35.816542] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.076 [2024-11-26 20:39:35.816573] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.076 [2024-11-26 20:39:35.828571] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.076 [2024-11-26 20:39:35.828618] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.076 [2024-11-26 20:39:35.840573] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.076 [2024-11-26 20:39:35.840614] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.076 [2024-11-26 20:39:35.852554] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.076 [2024-11-26 20:39:35.852592] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.076 [2024-11-26 20:39:35.864577] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.076 [2024-11-26 20:39:35.864619] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.076 [2024-11-26 20:39:35.876554] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.076 [2024-11-26 20:39:35.876584] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.076 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (65846) - No such process 00:11:41.076 20:39:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 65846 00:11:41.076 20:39:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:41.076 20:39:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.076 20:39:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:41.076 20:39:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.076 20:39:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:11:41.076 20:39:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.076 20:39:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:41.076 delay0 00:11:41.076 20:39:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.076 20:39:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:11:41.076 20:39:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.076 20:39:35 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:41.076 20:39:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.076 20:39:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 ns:1' 00:11:41.334 [2024-11-26 20:39:36.104378] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:11:47.899 Initializing NVMe Controllers 00:11:47.899 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:11:47.899 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:11:47.899 Initialization complete. Launching workers. 00:11:47.899 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 168 00:11:47.899 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 455, failed to submit 33 00:11:47.899 success 344, unsuccessful 111, failed 0 00:11:47.899 20:39:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:11:47.899 20:39:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:11:47.899 20:39:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:47.899 20:39:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:11:47.899 20:39:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:47.899 20:39:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:11:47.899 20:39:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:47.899 20:39:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:47.899 rmmod nvme_tcp 00:11:47.899 rmmod nvme_fabrics 00:11:47.899 rmmod nvme_keyring 00:11:47.899 20:39:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:47.899 20:39:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:11:47.899 20:39:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:11:47.899 20:39:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 65697 ']' 00:11:47.899 20:39:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 65697 00:11:47.899 20:39:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 65697 ']' 00:11:47.899 20:39:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 65697 00:11:47.899 20:39:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:11:47.899 20:39:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:47.899 20:39:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65697 00:11:47.899 killing process with pid 65697 00:11:47.899 20:39:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:11:47.899 20:39:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:11:47.899 20:39:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 
-- # echo 'killing process with pid 65697' 00:11:47.899 20:39:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 65697 00:11:47.899 20:39:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 65697 00:11:47.899 20:39:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:47.899 20:39:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:47.899 20:39:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:47.899 20:39:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:11:47.899 20:39:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:11:47.899 20:39:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:11:47.899 20:39:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:47.899 20:39:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:47.899 20:39:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:11:47.899 20:39:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:11:47.899 20:39:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:11:47.899 20:39:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:11:47.899 20:39:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:11:47.899 20:39:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:11:47.899 20:39:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:11:47.899 20:39:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:11:47.899 20:39:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:11:47.899 20:39:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:11:47.899 20:39:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:11:47.899 20:39:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:11:47.899 20:39:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:47.899 20:39:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:47.899 20:39:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@246 -- # remove_spdk_ns 00:11:47.899 20:39:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:47.899 20:39:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:47.899 20:39:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:47.899 20:39:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@300 -- # return 0 00:11:47.899 00:11:47.899 real 0m24.786s 00:11:47.899 user 0m39.404s 00:11:47.899 sys 0m8.282s 00:11:47.899 20:39:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:11:47.899 20:39:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:47.899 ************************************ 00:11:47.899 END TEST nvmf_zcopy 00:11:47.899 ************************************ 00:11:48.157 20:39:42 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:11:48.158 20:39:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:48.158 20:39:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:48.158 20:39:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:48.158 ************************************ 00:11:48.158 START TEST nvmf_nmic 00:11:48.158 ************************************ 00:11:48.158 20:39:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:11:48.158 * Looking for test storage... 00:11:48.158 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:48.158 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:48.158 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:11:48.158 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:48.158 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:48.158 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:48.158 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:48.158 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:48.158 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:11:48.158 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:11:48.158 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:11:48.158 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:11:48.158 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:11:48.158 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:11:48.158 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:11:48.158 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:48.158 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:11:48.158 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:11:48.158 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:48.158 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:48.158 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:11:48.158 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:11:48.158 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:48.158 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:11:48.158 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:11:48.158 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:11:48.158 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:11:48.158 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:48.158 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:11:48.158 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:11:48.158 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:48.158 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:48.158 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:11:48.158 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:48.158 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:48.158 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:48.158 --rc genhtml_branch_coverage=1 00:11:48.158 --rc genhtml_function_coverage=1 00:11:48.158 --rc genhtml_legend=1 00:11:48.158 --rc geninfo_all_blocks=1 00:11:48.158 --rc geninfo_unexecuted_blocks=1 00:11:48.158 00:11:48.158 ' 00:11:48.158 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:48.158 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:48.158 --rc genhtml_branch_coverage=1 00:11:48.158 --rc genhtml_function_coverage=1 00:11:48.158 --rc genhtml_legend=1 00:11:48.158 --rc geninfo_all_blocks=1 00:11:48.158 --rc geninfo_unexecuted_blocks=1 00:11:48.158 00:11:48.158 ' 00:11:48.158 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:48.158 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:48.158 --rc genhtml_branch_coverage=1 00:11:48.158 --rc genhtml_function_coverage=1 00:11:48.158 --rc genhtml_legend=1 00:11:48.158 --rc geninfo_all_blocks=1 00:11:48.158 --rc geninfo_unexecuted_blocks=1 00:11:48.158 00:11:48.158 ' 00:11:48.158 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:48.158 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:48.158 --rc genhtml_branch_coverage=1 00:11:48.158 --rc genhtml_function_coverage=1 00:11:48.158 --rc genhtml_legend=1 00:11:48.158 --rc geninfo_all_blocks=1 00:11:48.158 --rc geninfo_unexecuted_blocks=1 00:11:48.158 00:11:48.158 ' 00:11:48.158 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:48.158 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:11:48.158 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:48.158 20:39:43 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:48.158 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:48.158 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:48.158 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:48.158 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:48.158 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:48.158 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:48.158 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:48.158 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:48.416 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b 00:11:48.416 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=5b7a0101-ee75-44bd-b64f-b6a56d193f2b 00:11:48.416 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:48.416 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:48.416 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:48.416 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:48.416 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:48.416 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:11:48.416 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:48.416 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:48.416 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:48.416 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:48.416 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:48.416 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:48.416 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:11:48.416 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:48.416 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:11:48.416 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:48.416 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:48.416 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:48.416 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:48.416 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:48.416 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:48.416 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:48.416 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:48.416 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:48.416 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:48.416 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:48.416 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:48.416 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:11:48.416 20:39:43 
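The "[: : integer expression expected" complaint from test/nvmf/common.sh line 33 a few records above comes from bash's test builtin being handed an empty string where -eq needs an integer; the check just evaluates as false and the run continues. A two-line illustration, using a placeholder variable (SOME_FLAG is hypothetical, not the variable actually tested in common.sh):

    SOME_FLAG=""
    [ "$SOME_FLAG" -eq 1 ]        # prints "[: : integer expression expected", exit status 2
    [ "${SOME_FLAG:-0}" -eq 1 ]   # one common guard: default empty/unset to 0, no warning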
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:48.416 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:48.416 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:48.416 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:48.416 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:48.416 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:48.416 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:48.416 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:48.416 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:11:48.416 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:11:48.416 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:11:48.416 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:11:48.416 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:11:48.416 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@460 -- # nvmf_veth_init 00:11:48.416 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:48.417 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:11:48.417 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:11:48.417 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:11:48.417 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:48.417 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:11:48.417 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:48.417 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:11:48.417 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:48.417 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:11:48.417 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:48.417 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:48.417 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:48.417 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:48.417 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:48.417 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:48.417 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:11:48.417 Cannot 
find device "nvmf_init_br" 00:11:48.417 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # true 00:11:48.417 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:11:48.417 Cannot find device "nvmf_init_br2" 00:11:48.417 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # true 00:11:48.417 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:11:48.417 Cannot find device "nvmf_tgt_br" 00:11:48.417 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@164 -- # true 00:11:48.417 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:11:48.417 Cannot find device "nvmf_tgt_br2" 00:11:48.417 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@165 -- # true 00:11:48.417 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:11:48.417 Cannot find device "nvmf_init_br" 00:11:48.417 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # true 00:11:48.417 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:11:48.417 Cannot find device "nvmf_init_br2" 00:11:48.417 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@167 -- # true 00:11:48.417 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:11:48.417 Cannot find device "nvmf_tgt_br" 00:11:48.417 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@168 -- # true 00:11:48.417 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:11:48.417 Cannot find device "nvmf_tgt_br2" 00:11:48.417 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # true 00:11:48.417 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:11:48.417 Cannot find device "nvmf_br" 00:11:48.417 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # true 00:11:48.417 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:11:48.417 Cannot find device "nvmf_init_if" 00:11:48.417 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # true 00:11:48.417 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:11:48.417 Cannot find device "nvmf_init_if2" 00:11:48.417 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # true 00:11:48.417 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:48.417 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:48.417 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@173 -- # true 00:11:48.417 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:48.417 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:48.417 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # true 00:11:48.417 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:11:48.417 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 
00:11:48.417 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:11:48.417 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:48.417 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:48.417 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:48.417 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:48.676 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:48.676 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:11:48.676 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:11:48.676 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:11:48.676 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:11:48.676 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:11:48.676 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:11:48.676 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:11:48.676 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:11:48.676 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:11:48.676 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:48.676 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:48.676 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:48.676 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:11:48.676 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:11:48.676 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:11:48.676 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:11:48.676 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:48.676 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:48.676 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:48.676 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:11:48.676 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@218 
-- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:11:48.676 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:11:48.676 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:48.676 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:11:48.676 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:11:48.676 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:48.676 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.098 ms 00:11:48.676 00:11:48.676 --- 10.0.0.3 ping statistics --- 00:11:48.676 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:48.676 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:11:48.676 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:11:48.676 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:11:48.676 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.081 ms 00:11:48.676 00:11:48.676 --- 10.0.0.4 ping statistics --- 00:11:48.676 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:48.676 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:11:48.676 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:48.676 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:48.676 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:11:48.676 00:11:48.676 --- 10.0.0.1 ping statistics --- 00:11:48.676 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:48.676 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:11:48.676 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:11:48.676 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:48.676 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.053 ms 00:11:48.676 00:11:48.676 --- 10.0.0.2 ping statistics --- 00:11:48.676 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:48.676 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:11:48.676 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:48.676 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@461 -- # return 0 00:11:48.676 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:48.676 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:48.676 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:48.676 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:48.676 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:48.676 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:48.676 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:48.676 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:11:48.676 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:48.676 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:48.676 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:48.935 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=66232 00:11:48.935 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 66232 00:11:48.935 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 66232 ']' 00:11:48.935 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:48.935 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:48.935 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:48.935 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:48.935 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:48.935 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:48.935 20:39:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:48.935 [2024-11-26 20:39:43.735740] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
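For reference, the network bring-up traced above can be reproduced by hand. The sketch below uses the same interface names and 10.0.0.x addresses that appear in the trace, but it is a simplified outline of the harness's veth/namespace setup, not the exact helper from nvmf/common.sh.

  # Target-side interfaces live in a dedicated namespace; the host side acts as the initiator.
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if  type veth peer name nvmf_init_br    # initiator path 1
  ip link add nvmf_init_if2 type veth peer name nvmf_init_br2   # initiator path 2
  ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br     # target path 1
  ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2    # target path 2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip addr add 10.0.0.2/24 dev nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
  ip link set nvmf_init_if up && ip link set nvmf_init_if2 up
  ip netns exec nvmf_tgt_ns_spdk sh -c 'ip link set nvmf_tgt_if up; ip link set nvmf_tgt_if2 up; ip link set lo up'
  # A bridge ties the four host-side peer interfaces together.
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$dev" up
      ip link set "$dev" master nvmf_br
  done
  # Allow NVMe/TCP (port 4420) in and traffic across the bridge, then sanity-check connectivity,
  # as the pings above do (the trace also adds the matching rule for nvmf_init_if2).
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.3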
00:11:48.935 [2024-11-26 20:39:43.735858] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:49.192 [2024-11-26 20:39:43.950646] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:49.192 [2024-11-26 20:39:44.036418] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:49.192 [2024-11-26 20:39:44.036484] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:49.192 [2024-11-26 20:39:44.036509] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:49.192 [2024-11-26 20:39:44.036519] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:49.192 [2024-11-26 20:39:44.036527] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:49.192 [2024-11-26 20:39:44.037845] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:49.192 [2024-11-26 20:39:44.037984] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:49.192 [2024-11-26 20:39:44.037984] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:49.192 [2024-11-26 20:39:44.037923] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:49.192 [2024-11-26 20:39:44.119770] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:50.128 20:39:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:50.128 20:39:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:11:50.128 20:39:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:50.128 20:39:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:50.128 20:39:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:50.128 20:39:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:50.128 20:39:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:50.128 20:39:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.128 20:39:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:50.128 [2024-11-26 20:39:44.870992] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:50.128 20:39:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.128 20:39:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:50.128 20:39:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.128 20:39:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:50.128 Malloc0 00:11:50.128 20:39:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.128 20:39:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:50.128 20:39:44 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.128 20:39:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:50.128 20:39:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.128 20:39:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:50.128 20:39:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.128 20:39:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:50.128 20:39:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.128 20:39:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:11:50.128 20:39:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.128 20:39:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:50.128 [2024-11-26 20:39:44.944597] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:11:50.128 test case1: single bdev can't be used in multiple subsystems 00:11:50.128 20:39:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.128 20:39:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:11:50.128 20:39:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:11:50.128 20:39:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.128 20:39:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:50.128 20:39:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.128 20:39:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:11:50.128 20:39:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.128 20:39:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:50.128 20:39:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.128 20:39:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:11:50.128 20:39:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:11:50.128 20:39:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.128 20:39:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:50.128 [2024-11-26 20:39:44.968400] bdev.c:8507:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:11:50.128 [2024-11-26 20:39:44.968608] subsystem.c:2156:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:11:50.128 [2024-11-26 20:39:44.968833] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.128 request: 00:11:50.128 { 00:11:50.128 
"nqn": "nqn.2016-06.io.spdk:cnode2", 00:11:50.128 "namespace": { 00:11:50.128 "bdev_name": "Malloc0", 00:11:50.128 "no_auto_visible": false, 00:11:50.128 "hide_metadata": false 00:11:50.128 }, 00:11:50.128 "method": "nvmf_subsystem_add_ns", 00:11:50.128 "req_id": 1 00:11:50.128 } 00:11:50.128 Got JSON-RPC error response 00:11:50.128 response: 00:11:50.128 { 00:11:50.128 "code": -32602, 00:11:50.128 "message": "Invalid parameters" 00:11:50.128 } 00:11:50.128 20:39:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:11:50.128 20:39:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:11:50.128 20:39:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:11:50.128 20:39:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:11:50.128 Adding namespace failed - expected result. 00:11:50.128 20:39:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:11:50.128 test case2: host connect to nvmf target in multiple paths 00:11:50.128 20:39:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:11:50.128 20:39:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.128 20:39:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:50.128 [2024-11-26 20:39:44.980607] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:11:50.128 20:39:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.128 20:39:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b --hostid=5b7a0101-ee75-44bd-b64f-b6a56d193f2b -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:11:50.386 20:39:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b --hostid=5b7a0101-ee75-44bd-b64f-b6a56d193f2b -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4421 00:11:50.386 20:39:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:11:50.386 20:39:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:11:50.386 20:39:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:50.386 20:39:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:50.386 20:39:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:11:52.295 20:39:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:52.295 20:39:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:52.295 20:39:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:52.554 20:39:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:52.554 20:39:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 
00:11:52.554 20:39:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:11:52.554 20:39:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:11:52.555 [global] 00:11:52.555 thread=1 00:11:52.555 invalidate=1 00:11:52.555 rw=write 00:11:52.555 time_based=1 00:11:52.555 runtime=1 00:11:52.555 ioengine=libaio 00:11:52.555 direct=1 00:11:52.555 bs=4096 00:11:52.555 iodepth=1 00:11:52.555 norandommap=0 00:11:52.555 numjobs=1 00:11:52.555 00:11:52.555 verify_dump=1 00:11:52.555 verify_backlog=512 00:11:52.555 verify_state_save=0 00:11:52.555 do_verify=1 00:11:52.555 verify=crc32c-intel 00:11:52.555 [job0] 00:11:52.555 filename=/dev/nvme0n1 00:11:52.555 Could not set queue depth (nvme0n1) 00:11:52.555 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:52.555 fio-3.35 00:11:52.555 Starting 1 thread 00:11:53.934 00:11:53.934 job0: (groupid=0, jobs=1): err= 0: pid=66328: Tue Nov 26 20:39:48 2024 00:11:53.934 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:11:53.934 slat (nsec): min=7915, max=48646, avg=10878.60, stdev=3353.27 00:11:53.934 clat (usec): min=118, max=517, avg=179.57, stdev=21.23 00:11:53.934 lat (usec): min=126, max=527, avg=190.45, stdev=21.69 00:11:53.934 clat percentiles (usec): 00:11:53.934 | 1.00th=[ 141], 5.00th=[ 151], 10.00th=[ 155], 20.00th=[ 163], 00:11:53.934 | 30.00th=[ 167], 40.00th=[ 174], 50.00th=[ 178], 60.00th=[ 184], 00:11:53.934 | 70.00th=[ 188], 80.00th=[ 196], 90.00th=[ 208], 95.00th=[ 217], 00:11:53.934 | 99.00th=[ 235], 99.50th=[ 243], 99.90th=[ 260], 99.95th=[ 269], 00:11:53.934 | 99.99th=[ 519] 00:11:53.934 write: IOPS=3218, BW=12.6MiB/s (13.2MB/s)(12.6MiB/1001msec); 0 zone resets 00:11:53.934 slat (usec): min=11, max=118, avg=16.23, stdev= 6.39 00:11:53.934 clat (usec): min=74, max=1467, avg=110.28, stdev=51.86 00:11:53.934 lat (usec): min=90, max=1489, avg=126.51, stdev=53.10 00:11:53.934 clat percentiles (usec): 00:11:53.934 | 1.00th=[ 80], 5.00th=[ 86], 10.00th=[ 89], 20.00th=[ 95], 00:11:53.934 | 30.00th=[ 99], 40.00th=[ 103], 50.00th=[ 106], 60.00th=[ 111], 00:11:53.934 | 70.00th=[ 115], 80.00th=[ 120], 90.00th=[ 127], 95.00th=[ 135], 00:11:53.934 | 99.00th=[ 159], 99.50th=[ 219], 99.90th=[ 1188], 99.95th=[ 1319], 00:11:53.934 | 99.99th=[ 1467] 00:11:53.934 bw ( KiB/s): min=12688, max=12688, per=98.55%, avg=12688.00, stdev= 0.00, samples=1 00:11:53.934 iops : min= 3172, max= 3172, avg=3172.00, stdev= 0.00, samples=1 00:11:53.934 lat (usec) : 100=16.44%, 250=83.19%, 500=0.19%, 750=0.08%, 1000=0.03% 00:11:53.934 lat (msec) : 2=0.06% 00:11:53.934 cpu : usr=1.30%, sys=7.50%, ctx=6295, majf=0, minf=5 00:11:53.934 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:53.934 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:53.934 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:53.934 issued rwts: total=3072,3222,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:53.934 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:53.934 00:11:53.934 Run status group 0 (all jobs): 00:11:53.934 READ: bw=12.0MiB/s (12.6MB/s), 12.0MiB/s-12.0MiB/s (12.6MB/s-12.6MB/s), io=12.0MiB (12.6MB), run=1001-1001msec 00:11:53.934 WRITE: bw=12.6MiB/s (13.2MB/s), 12.6MiB/s-12.6MiB/s (13.2MB/s-13.2MB/s), io=12.6MiB (13.2MB), run=1001-1001msec 00:11:53.934 00:11:53.934 Disk stats (read/write): 
00:11:53.934 nvme0n1: ios=2685/3072, merge=0/0, ticks=503/363, in_queue=866, util=91.48% 00:11:53.934 20:39:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:53.934 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:11:53.934 20:39:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:53.934 20:39:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:11:53.934 20:39:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:53.934 20:39:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:53.934 20:39:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:53.934 20:39:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:53.934 20:39:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:11:53.934 20:39:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:11:53.934 20:39:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:11:53.934 20:39:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:53.934 20:39:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:11:53.934 20:39:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:53.934 20:39:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:11:53.934 20:39:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:53.934 20:39:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:53.934 rmmod nvme_tcp 00:11:53.934 rmmod nvme_fabrics 00:11:53.934 rmmod nvme_keyring 00:11:53.934 20:39:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:53.934 20:39:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:11:53.934 20:39:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:11:53.934 20:39:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 66232 ']' 00:11:53.934 20:39:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 66232 00:11:53.934 20:39:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 66232 ']' 00:11:53.934 20:39:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 66232 00:11:53.934 20:39:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:11:53.934 20:39:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:53.934 20:39:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66232 00:11:53.934 20:39:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:53.934 20:39:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:53.934 20:39:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66232' 00:11:53.934 killing process with pid 66232 00:11:53.934 20:39:48 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 66232 00:11:53.934 20:39:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 66232 00:11:54.194 20:39:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:54.194 20:39:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:54.194 20:39:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:54.194 20:39:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:11:54.194 20:39:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:54.194 20:39:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:11:54.194 20:39:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:11:54.194 20:39:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:54.194 20:39:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:11:54.194 20:39:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:11:54.194 20:39:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:11:54.453 20:39:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:11:54.453 20:39:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:11:54.453 20:39:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:11:54.453 20:39:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:11:54.453 20:39:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:11:54.453 20:39:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:11:54.453 20:39:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:11:54.453 20:39:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:11:54.453 20:39:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:11:54.453 20:39:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:54.453 20:39:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:54.453 20:39:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@246 -- # remove_spdk_ns 00:11:54.453 20:39:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:54.453 20:39:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:54.453 20:39:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:54.453 20:39:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@300 -- # return 0 00:11:54.453 00:11:54.453 real 0m6.514s 00:11:54.453 user 0m18.958s 00:11:54.453 sys 0m2.984s 00:11:54.453 20:39:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:54.453 20:39:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
common/autotest_common.sh@10 -- # set +x 00:11:54.453 ************************************ 00:11:54.453 END TEST nvmf_nmic 00:11:54.453 ************************************ 00:11:54.712 20:39:49 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:11:54.712 20:39:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:54.712 20:39:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:54.712 20:39:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:54.712 ************************************ 00:11:54.712 START TEST nvmf_fio_target 00:11:54.712 ************************************ 00:11:54.712 20:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:11:54.712 * Looking for test storage... 00:11:54.713 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:54.713 20:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:54.713 20:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:11:54.713 20:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:54.713 20:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:54.713 20:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:54.713 20:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:54.713 20:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:54.713 20:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:11:54.713 20:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:11:54.713 20:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:11:54.713 20:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:11:54.713 20:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:11:54.713 20:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:11:54.713 20:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:11:54.713 20:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:54.713 20:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:11:54.713 20:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:11:54.713 20:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:54.713 20:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:54.713 20:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:11:54.713 20:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:11:54.713 20:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:54.713 20:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:11:54.713 20:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:11:54.713 20:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:11:54.713 20:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:11:54.713 20:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:54.713 20:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:11:54.713 20:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:11:54.713 20:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:54.713 20:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:54.713 20:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:11:54.713 20:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:54.713 20:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:54.713 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:54.713 --rc genhtml_branch_coverage=1 00:11:54.713 --rc genhtml_function_coverage=1 00:11:54.713 --rc genhtml_legend=1 00:11:54.713 --rc geninfo_all_blocks=1 00:11:54.713 --rc geninfo_unexecuted_blocks=1 00:11:54.713 00:11:54.713 ' 00:11:54.713 20:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:54.713 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:54.713 --rc genhtml_branch_coverage=1 00:11:54.713 --rc genhtml_function_coverage=1 00:11:54.713 --rc genhtml_legend=1 00:11:54.713 --rc geninfo_all_blocks=1 00:11:54.713 --rc geninfo_unexecuted_blocks=1 00:11:54.713 00:11:54.713 ' 00:11:54.713 20:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:54.713 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:54.713 --rc genhtml_branch_coverage=1 00:11:54.713 --rc genhtml_function_coverage=1 00:11:54.713 --rc genhtml_legend=1 00:11:54.713 --rc geninfo_all_blocks=1 00:11:54.713 --rc geninfo_unexecuted_blocks=1 00:11:54.713 00:11:54.713 ' 00:11:54.713 20:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:54.713 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:54.713 --rc genhtml_branch_coverage=1 00:11:54.713 --rc genhtml_function_coverage=1 00:11:54.713 --rc genhtml_legend=1 00:11:54.713 --rc geninfo_all_blocks=1 00:11:54.713 --rc geninfo_unexecuted_blocks=1 00:11:54.713 00:11:54.713 ' 00:11:54.713 20:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:54.713 20:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:11:54.713 
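Stepping outside the trace for a moment: the nmic test that finished above configured the target entirely over JSON-RPC (via the rpc_cmd helper), and the fio target test starting here issues the same methods directly through scripts/rpc.py. A hand-run sketch of the nmic sequence, using only method names and arguments that appear in the trace, is shown below; the rpc variable and the || echo on the expected failure are illustrative shorthand, not part of the original script.

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # The target was launched inside the namespace earlier in the trace:
  #   ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  # test case1: a second subsystem cannot claim a bdev that is already in use
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 || echo 'Adding namespace failed - expected result.'
  # test case2: expose a second listener and connect the kernel initiator over both paths
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421
  nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b \
               --hostid=5b7a0101-ee75-44bd-b64f-b6a56d193f2b \
               -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420
  nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b \
               --hostid=5b7a0101-ee75-44bd-b64f-b6a56d193f2b \
               -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4421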
20:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:54.713 20:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:54.713 20:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:54.713 20:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:54.713 20:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:54.713 20:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:54.713 20:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:54.713 20:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:54.713 20:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:54.713 20:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:54.713 20:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b 00:11:54.713 20:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b7a0101-ee75-44bd-b64f-b6a56d193f2b 00:11:54.713 20:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:54.713 20:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:54.713 20:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:54.713 20:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:54.713 20:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:54.713 20:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:11:54.713 20:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:54.713 20:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:54.713 20:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:54.713 20:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:54.713 20:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:54.713 20:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:54.713 20:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:11:54.713 20:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:54.713 20:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:11:54.713 20:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:54.713 20:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:54.713 20:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:54.713 20:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:54.713 20:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:54.713 20:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:54.713 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:54.713 20:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:54.713 20:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:54.713 20:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:54.972 20:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:54.972 20:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:54.972 20:39:49 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:54.972 20:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:11:54.972 20:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:54.972 20:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:54.972 20:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:54.972 20:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:54.972 20:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:54.972 20:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:54.972 20:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:54.972 20:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:54.972 20:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:11:54.972 20:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:11:54.972 20:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:11:54.972 20:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:11:54.972 20:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:11:54.972 20:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:11:54.972 20:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:54.972 20:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:11:54.972 20:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:11:54.972 20:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:11:54.972 20:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:54.972 20:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:11:54.972 20:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:54.972 20:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:11:54.972 20:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:54.972 20:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:11:54.972 20:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:54.972 20:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:54.972 20:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:54.972 20:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@158 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:54.972 20:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:54.972 20:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:54.972 20:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:11:54.972 Cannot find device "nvmf_init_br" 00:11:54.972 20:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # true 00:11:54.972 20:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:11:54.972 Cannot find device "nvmf_init_br2" 00:11:54.972 20:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # true 00:11:54.972 20:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:11:54.972 Cannot find device "nvmf_tgt_br" 00:11:54.972 20:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@164 -- # true 00:11:54.972 20:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:11:54.972 Cannot find device "nvmf_tgt_br2" 00:11:54.972 20:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@165 -- # true 00:11:54.972 20:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:11:54.972 Cannot find device "nvmf_init_br" 00:11:54.972 20:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # true 00:11:54.972 20:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:11:54.972 Cannot find device "nvmf_init_br2" 00:11:54.972 20:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@167 -- # true 00:11:54.972 20:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:11:54.972 Cannot find device "nvmf_tgt_br" 00:11:54.972 20:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@168 -- # true 00:11:54.972 20:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:11:54.972 Cannot find device "nvmf_tgt_br2" 00:11:54.972 20:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # true 00:11:54.972 20:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:11:54.972 Cannot find device "nvmf_br" 00:11:54.972 20:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # true 00:11:54.972 20:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:11:54.972 Cannot find device "nvmf_init_if" 00:11:54.972 20:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # true 00:11:54.972 20:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:11:54.972 Cannot find device "nvmf_init_if2" 00:11:54.972 20:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # true 00:11:54.972 20:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:54.972 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:54.972 20:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@173 -- # true 00:11:54.972 
20:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:54.972 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:54.972 20:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # true 00:11:54.972 20:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:11:54.972 20:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:54.972 20:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:11:54.972 20:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:54.972 20:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:54.972 20:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:54.972 20:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:54.972 20:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:54.972 20:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:11:54.972 20:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:11:55.231 20:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:11:55.231 20:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:11:55.231 20:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:11:55.231 20:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:11:55.231 20:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:11:55.231 20:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:11:55.231 20:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:11:55.231 20:39:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:55.231 20:39:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:55.231 20:39:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:55.231 20:39:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:11:55.231 20:39:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:11:55.231 20:39:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:11:55.231 20:39:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master 
nvmf_br 00:11:55.231 20:39:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:55.231 20:39:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:55.231 20:39:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:55.231 20:39:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:11:55.231 20:39:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:11:55.231 20:39:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:11:55.231 20:39:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:55.232 20:39:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:11:55.232 20:39:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:11:55.232 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:55.232 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.101 ms 00:11:55.232 00:11:55.232 --- 10.0.0.3 ping statistics --- 00:11:55.232 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:55.232 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:11:55.232 20:39:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:11:55.232 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:11:55.232 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.055 ms 00:11:55.232 00:11:55.232 --- 10.0.0.4 ping statistics --- 00:11:55.232 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:55.232 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:11:55.232 20:39:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:55.232 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:55.232 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.052 ms 00:11:55.232 00:11:55.232 --- 10.0.0.1 ping statistics --- 00:11:55.232 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:55.232 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:11:55.232 20:39:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:11:55.232 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:55.232 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.110 ms 00:11:55.232 00:11:55.232 --- 10.0.0.2 ping statistics --- 00:11:55.232 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:55.232 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:11:55.232 20:39:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:55.232 20:39:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@461 -- # return 0 00:11:55.232 20:39:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:55.232 20:39:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:55.232 20:39:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:55.232 20:39:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:55.232 20:39:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:55.232 20:39:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:55.232 20:39:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:55.232 20:39:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:11:55.232 20:39:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:55.232 20:39:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:55.232 20:39:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:55.232 20:39:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=66558 00:11:55.232 20:39:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 66558 00:11:55.232 20:39:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 66558 ']' 00:11:55.232 20:39:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:55.232 20:39:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:55.232 20:39:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:55.232 20:39:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:55.232 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:55.232 20:39:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:55.232 20:39:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:55.491 [2024-11-26 20:39:50.246355] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
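One small pattern worth noting, since it has now appeared in both test setups above: the ipts wrapper tags every firewall rule it adds with an SPDK_NVMF comment, and the iptr teardown (seen at the end of the nmic test) restores everything except those tagged rules. A minimal standalone sketch of that pattern, assuming stock iptables tooling:

  # add a rule carrying a recognizable SPDK_NVMF comment, exactly as ipts expands to in the trace
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
  # teardown: re-load the ruleset minus anything tagged SPDK_NVMF, leaving other rules untouched
  iptables-save | grep -v SPDK_NVMF | iptables-restore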
00:11:55.491 [2024-11-26 20:39:50.246719] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:55.491 [2024-11-26 20:39:50.416542] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:55.749 [2024-11-26 20:39:50.504447] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:55.749 [2024-11-26 20:39:50.504735] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:55.749 [2024-11-26 20:39:50.504937] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:55.749 [2024-11-26 20:39:50.505144] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:55.749 [2024-11-26 20:39:50.505289] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:55.749 [2024-11-26 20:39:50.507144] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:55.749 [2024-11-26 20:39:50.507252] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:55.749 [2024-11-26 20:39:50.507316] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:55.749 [2024-11-26 20:39:50.507337] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:55.749 [2024-11-26 20:39:50.607837] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:55.749 20:39:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:55.749 20:39:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:11:55.749 20:39:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:55.749 20:39:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:55.749 20:39:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:56.007 20:39:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:56.007 20:39:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:56.265 [2024-11-26 20:39:51.090663] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:56.265 20:39:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:56.533 20:39:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:11:56.534 20:39:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:56.825 20:39:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:11:56.825 20:39:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:57.083 20:39:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:11:57.083 20:39:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:57.341 20:39:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:11:57.341 20:39:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:11:57.908 20:39:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:58.165 20:39:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:11:58.165 20:39:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:58.423 20:39:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:11:58.423 20:39:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:58.680 20:39:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:11:58.680 20:39:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:11:58.939 20:39:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:59.197 20:39:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:11:59.197 20:39:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:59.455 20:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:11:59.455 20:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:59.714 20:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:11:59.714 [2024-11-26 20:39:54.662997] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:11:59.714 20:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:11:59.972 20:39:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:12:00.231 20:39:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b --hostid=5b7a0101-ee75-44bd-b64f-b6a56d193f2b -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:12:00.490 20:39:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:12:00.490 20:39:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:12:00.490 20:39:55 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:00.490 20:39:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:12:00.490 20:39:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:12:00.490 20:39:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:12:02.393 20:39:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:02.393 20:39:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:02.393 20:39:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:02.393 20:39:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:12:02.393 20:39:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:02.393 20:39:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:12:02.393 20:39:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:12:02.393 [global] 00:12:02.393 thread=1 00:12:02.393 invalidate=1 00:12:02.393 rw=write 00:12:02.393 time_based=1 00:12:02.393 runtime=1 00:12:02.393 ioengine=libaio 00:12:02.393 direct=1 00:12:02.393 bs=4096 00:12:02.393 iodepth=1 00:12:02.393 norandommap=0 00:12:02.393 numjobs=1 00:12:02.393 00:12:02.393 verify_dump=1 00:12:02.393 verify_backlog=512 00:12:02.393 verify_state_save=0 00:12:02.393 do_verify=1 00:12:02.393 verify=crc32c-intel 00:12:02.393 [job0] 00:12:02.393 filename=/dev/nvme0n1 00:12:02.393 [job1] 00:12:02.393 filename=/dev/nvme0n2 00:12:02.652 [job2] 00:12:02.652 filename=/dev/nvme0n3 00:12:02.652 [job3] 00:12:02.652 filename=/dev/nvme0n4 00:12:02.652 Could not set queue depth (nvme0n1) 00:12:02.652 Could not set queue depth (nvme0n2) 00:12:02.652 Could not set queue depth (nvme0n3) 00:12:02.652 Could not set queue depth (nvme0n4) 00:12:02.652 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:02.652 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:02.652 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:02.652 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:02.652 fio-3.35 00:12:02.652 Starting 4 threads 00:12:04.028 00:12:04.028 job0: (groupid=0, jobs=1): err= 0: pid=66747: Tue Nov 26 20:39:58 2024 00:12:04.028 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:12:04.028 slat (usec): min=6, max=201, avg=11.26, stdev= 5.53 00:12:04.028 clat (usec): min=56, max=3915, avg=272.56, stdev=148.11 00:12:04.028 lat (usec): min=151, max=3924, avg=283.82, stdev=149.04 00:12:04.028 clat percentiles (usec): 00:12:04.028 | 1.00th=[ 186], 5.00th=[ 204], 10.00th=[ 210], 20.00th=[ 217], 00:12:04.028 | 30.00th=[ 223], 40.00th=[ 229], 50.00th=[ 235], 60.00th=[ 243], 00:12:04.028 | 70.00th=[ 258], 80.00th=[ 334], 90.00th=[ 400], 95.00th=[ 437], 00:12:04.028 | 99.00th=[ 502], 99.50th=[ 545], 99.90th=[ 2933], 99.95th=[ 3785], 00:12:04.028 | 99.99th=[ 3916] 
00:12:04.028 write: IOPS=2092, BW=8372KiB/s (8573kB/s)(8380KiB/1001msec); 0 zone resets 00:12:04.028 slat (usec): min=8, max=118, avg=14.88, stdev= 5.00 00:12:04.028 clat (usec): min=76, max=4208, avg=182.49, stdev=124.23 00:12:04.028 lat (usec): min=89, max=4228, avg=197.37, stdev=124.50 00:12:04.028 clat percentiles (usec): 00:12:04.028 | 1.00th=[ 86], 5.00th=[ 102], 10.00th=[ 149], 20.00th=[ 163], 00:12:04.028 | 30.00th=[ 169], 40.00th=[ 174], 50.00th=[ 180], 60.00th=[ 184], 00:12:04.028 | 70.00th=[ 192], 80.00th=[ 200], 90.00th=[ 215], 95.00th=[ 225], 00:12:04.028 | 99.00th=[ 249], 99.50th=[ 273], 99.90th=[ 1614], 99.95th=[ 3523], 00:12:04.028 | 99.99th=[ 4228] 00:12:04.028 bw ( KiB/s): min=10400, max=10400, per=31.59%, avg=10400.00, stdev= 0.00, samples=1 00:12:04.028 iops : min= 2600, max= 2600, avg=2600.00, stdev= 0.00, samples=1 00:12:04.028 lat (usec) : 100=2.39%, 250=80.57%, 500=16.41%, 750=0.43%, 1000=0.05% 00:12:04.028 lat (msec) : 2=0.02%, 4=0.10%, 10=0.02% 00:12:04.028 cpu : usr=1.70%, sys=4.50%, ctx=4144, majf=0, minf=11 00:12:04.028 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:04.028 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:04.028 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:04.028 issued rwts: total=2048,2095,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:04.028 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:04.028 job1: (groupid=0, jobs=1): err= 0: pid=66748: Tue Nov 26 20:39:58 2024 00:12:04.028 read: IOPS=1886, BW=7544KiB/s (7726kB/s)(7552KiB/1001msec) 00:12:04.028 slat (nsec): min=7431, max=36912, avg=10758.56, stdev=2593.37 00:12:04.028 clat (usec): min=112, max=1066, avg=293.51, stdev=77.97 00:12:04.028 lat (usec): min=121, max=1077, avg=304.27, stdev=78.08 00:12:04.028 clat percentiles (usec): 00:12:04.028 | 1.00th=[ 155], 5.00th=[ 208], 10.00th=[ 223], 20.00th=[ 237], 00:12:04.028 | 30.00th=[ 247], 40.00th=[ 258], 50.00th=[ 269], 60.00th=[ 297], 00:12:04.028 | 70.00th=[ 322], 80.00th=[ 343], 90.00th=[ 400], 95.00th=[ 429], 00:12:04.028 | 99.00th=[ 553], 99.50th=[ 578], 99.90th=[ 742], 99.95th=[ 1074], 00:12:04.028 | 99.99th=[ 1074] 00:12:04.028 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:12:04.028 slat (usec): min=11, max=117, avg=20.02, stdev= 8.97 00:12:04.028 clat (usec): min=84, max=363, avg=185.19, stdev=46.79 00:12:04.028 lat (usec): min=99, max=463, avg=205.21, stdev=49.01 00:12:04.028 clat percentiles (usec): 00:12:04.028 | 1.00th=[ 98], 5.00th=[ 110], 10.00th=[ 117], 20.00th=[ 139], 00:12:04.028 | 30.00th=[ 172], 40.00th=[ 182], 50.00th=[ 188], 60.00th=[ 196], 00:12:04.028 | 70.00th=[ 202], 80.00th=[ 217], 90.00th=[ 239], 95.00th=[ 260], 00:12:04.028 | 99.00th=[ 330], 99.50th=[ 338], 99.90th=[ 355], 99.95th=[ 359], 00:12:04.028 | 99.99th=[ 363] 00:12:04.028 bw ( KiB/s): min= 8592, max= 8592, per=26.10%, avg=8592.00, stdev= 0.00, samples=1 00:12:04.028 iops : min= 2148, max= 2148, avg=2148.00, stdev= 0.00, samples=1 00:12:04.028 lat (usec) : 100=0.64%, 250=63.74%, 500=34.45%, 750=1.14% 00:12:04.028 lat (msec) : 2=0.03% 00:12:04.028 cpu : usr=1.20%, sys=5.20%, ctx=3945, majf=0, minf=7 00:12:04.028 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:04.028 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:04.028 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:04.028 issued rwts: total=1888,2048,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:12:04.028 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:04.028 job2: (groupid=0, jobs=1): err= 0: pid=66749: Tue Nov 26 20:39:58 2024 00:12:04.028 read: IOPS=2044, BW=8180KiB/s (8376kB/s)(8188KiB/1001msec) 00:12:04.028 slat (nsec): min=6167, max=52946, avg=11117.29, stdev=4096.04 00:12:04.028 clat (usec): min=133, max=3974, avg=275.89, stdev=172.85 00:12:04.028 lat (usec): min=155, max=3990, avg=287.01, stdev=173.85 00:12:04.028 clat percentiles (usec): 00:12:04.028 | 1.00th=[ 192], 5.00th=[ 204], 10.00th=[ 210], 20.00th=[ 219], 00:12:04.028 | 30.00th=[ 223], 40.00th=[ 229], 50.00th=[ 235], 60.00th=[ 243], 00:12:04.028 | 70.00th=[ 258], 80.00th=[ 326], 90.00th=[ 400], 95.00th=[ 441], 00:12:04.028 | 99.00th=[ 545], 99.50th=[ 570], 99.90th=[ 3687], 99.95th=[ 3752], 00:12:04.028 | 99.99th=[ 3982] 00:12:04.028 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:12:04.028 slat (nsec): min=9403, max=86948, avg=16441.13, stdev=5206.79 00:12:04.028 clat (usec): min=105, max=517, avg=182.48, stdev=27.10 00:12:04.028 lat (usec): min=117, max=532, avg=198.92, stdev=26.99 00:12:04.028 clat percentiles (usec): 00:12:04.028 | 1.00th=[ 133], 5.00th=[ 151], 10.00th=[ 157], 20.00th=[ 163], 00:12:04.028 | 30.00th=[ 169], 40.00th=[ 174], 50.00th=[ 178], 60.00th=[ 182], 00:12:04.028 | 70.00th=[ 190], 80.00th=[ 198], 90.00th=[ 215], 95.00th=[ 237], 00:12:04.028 | 99.00th=[ 265], 99.50th=[ 281], 99.90th=[ 343], 99.95th=[ 343], 00:12:04.028 | 99.99th=[ 519] 00:12:04.028 bw ( KiB/s): min=10384, max=10384, per=31.54%, avg=10384.00, stdev= 0.00, samples=1 00:12:04.028 iops : min= 2596, max= 2596, avg=2596.00, stdev= 0.00, samples=1 00:12:04.028 lat (usec) : 250=82.22%, 500=16.73%, 750=0.90% 00:12:04.028 lat (msec) : 2=0.02%, 4=0.12% 00:12:04.028 cpu : usr=1.30%, sys=5.30%, ctx=4097, majf=0, minf=13 00:12:04.028 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:04.028 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:04.028 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:04.028 issued rwts: total=2047,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:04.028 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:04.028 job3: (groupid=0, jobs=1): err= 0: pid=66750: Tue Nov 26 20:39:58 2024 00:12:04.028 read: IOPS=1694, BW=6777KiB/s (6940kB/s)(6784KiB/1001msec) 00:12:04.028 slat (usec): min=6, max=458, avg=14.11, stdev=11.80 00:12:04.028 clat (usec): min=140, max=2360, avg=302.82, stdev=94.57 00:12:04.028 lat (usec): min=153, max=2373, avg=316.93, stdev=94.97 00:12:04.028 clat percentiles (usec): 00:12:04.028 | 1.00th=[ 198], 5.00th=[ 215], 10.00th=[ 225], 20.00th=[ 239], 00:12:04.028 | 30.00th=[ 247], 40.00th=[ 258], 50.00th=[ 269], 60.00th=[ 302], 00:12:04.028 | 70.00th=[ 338], 80.00th=[ 375], 90.00th=[ 424], 95.00th=[ 461], 00:12:04.028 | 99.00th=[ 519], 99.50th=[ 545], 99.90th=[ 775], 99.95th=[ 2376], 00:12:04.028 | 99.99th=[ 2376] 00:12:04.028 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:12:04.028 slat (usec): min=10, max=117, avg=24.80, stdev= 8.80 00:12:04.028 clat (usec): min=97, max=643, avg=197.97, stdev=60.35 00:12:04.028 lat (usec): min=115, max=660, avg=222.76, stdev=63.65 00:12:04.028 clat percentiles (usec): 00:12:04.028 | 1.00th=[ 110], 5.00th=[ 118], 10.00th=[ 124], 20.00th=[ 151], 00:12:04.028 | 30.00th=[ 167], 40.00th=[ 178], 50.00th=[ 186], 60.00th=[ 196], 00:12:04.028 | 70.00th=[ 212], 80.00th=[ 241], 90.00th=[ 
297], 95.00th=[ 318], 00:12:04.028 | 99.00th=[ 347], 99.50th=[ 379], 99.90th=[ 441], 99.95th=[ 445], 00:12:04.028 | 99.99th=[ 644] 00:12:04.028 bw ( KiB/s): min= 8192, max= 8192, per=24.88%, avg=8192.00, stdev= 0.00, samples=1 00:12:04.028 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:12:04.028 lat (usec) : 100=0.08%, 250=60.23%, 500=39.08%, 750=0.56%, 1000=0.03% 00:12:04.028 lat (msec) : 4=0.03% 00:12:04.028 cpu : usr=1.90%, sys=5.90%, ctx=3744, majf=0, minf=15 00:12:04.028 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:04.028 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:04.028 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:04.029 issued rwts: total=1696,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:04.029 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:04.029 00:12:04.029 Run status group 0 (all jobs): 00:12:04.029 READ: bw=30.0MiB/s (31.4MB/s), 6777KiB/s-8184KiB/s (6940kB/s-8380kB/s), io=30.0MiB (31.5MB), run=1001-1001msec 00:12:04.029 WRITE: bw=32.2MiB/s (33.7MB/s), 8184KiB/s-8372KiB/s (8380kB/s-8573kB/s), io=32.2MiB (33.7MB), run=1001-1001msec 00:12:04.029 00:12:04.029 Disk stats (read/write): 00:12:04.029 nvme0n1: ios=1740/2048, merge=0/0, ticks=418/341, in_queue=759, util=86.36% 00:12:04.029 nvme0n2: ios=1580/1995, merge=0/0, ticks=448/378, in_queue=826, util=87.79% 00:12:04.029 nvme0n3: ios=1666/2048, merge=0/0, ticks=398/378, in_queue=776, util=88.30% 00:12:04.029 nvme0n4: ios=1536/1648, merge=0/0, ticks=455/337, in_queue=792, util=89.67% 00:12:04.029 20:39:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:12:04.029 [global] 00:12:04.029 thread=1 00:12:04.029 invalidate=1 00:12:04.029 rw=randwrite 00:12:04.029 time_based=1 00:12:04.029 runtime=1 00:12:04.029 ioengine=libaio 00:12:04.029 direct=1 00:12:04.029 bs=4096 00:12:04.029 iodepth=1 00:12:04.029 norandommap=0 00:12:04.029 numjobs=1 00:12:04.029 00:12:04.029 verify_dump=1 00:12:04.029 verify_backlog=512 00:12:04.029 verify_state_save=0 00:12:04.029 do_verify=1 00:12:04.029 verify=crc32c-intel 00:12:04.029 [job0] 00:12:04.029 filename=/dev/nvme0n1 00:12:04.029 [job1] 00:12:04.029 filename=/dev/nvme0n2 00:12:04.029 [job2] 00:12:04.029 filename=/dev/nvme0n3 00:12:04.029 [job3] 00:12:04.029 filename=/dev/nvme0n4 00:12:04.029 Could not set queue depth (nvme0n1) 00:12:04.029 Could not set queue depth (nvme0n2) 00:12:04.029 Could not set queue depth (nvme0n3) 00:12:04.029 Could not set queue depth (nvme0n4) 00:12:04.029 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:04.029 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:04.029 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:04.029 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:04.029 fio-3.35 00:12:04.029 Starting 4 threads 00:12:05.497 00:12:05.497 job0: (groupid=0, jobs=1): err= 0: pid=66803: Tue Nov 26 20:40:00 2024 00:12:05.497 read: IOPS=2445, BW=9782KiB/s (10.0MB/s)(9792KiB/1001msec) 00:12:05.497 slat (nsec): min=6702, max=44401, avg=10365.70, stdev=2736.01 00:12:05.497 clat (usec): min=131, max=518, avg=229.33, stdev=72.16 00:12:05.497 lat (usec): min=143, max=532, 
avg=239.70, stdev=72.10 00:12:05.497 clat percentiles (usec): 00:12:05.497 | 1.00th=[ 139], 5.00th=[ 145], 10.00th=[ 149], 20.00th=[ 155], 00:12:05.497 | 30.00th=[ 165], 40.00th=[ 182], 50.00th=[ 223], 60.00th=[ 239], 00:12:05.497 | 70.00th=[ 285], 80.00th=[ 314], 90.00th=[ 330], 95.00th=[ 343], 00:12:05.497 | 99.00th=[ 371], 99.50th=[ 388], 99.90th=[ 420], 99.95th=[ 465], 00:12:05.497 | 99.99th=[ 519] 00:12:05.497 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:12:05.497 slat (usec): min=8, max=316, avg=21.42, stdev=18.36 00:12:05.497 clat (usec): min=3, max=7444, avg=137.15, stdev=187.32 00:12:05.497 lat (usec): min=101, max=7471, avg=158.57, stdev=187.52 00:12:05.497 clat percentiles (usec): 00:12:05.497 | 1.00th=[ 91], 5.00th=[ 102], 10.00th=[ 105], 20.00th=[ 111], 00:12:05.497 | 30.00th=[ 115], 40.00th=[ 120], 50.00th=[ 124], 60.00th=[ 130], 00:12:05.497 | 70.00th=[ 137], 80.00th=[ 149], 90.00th=[ 165], 95.00th=[ 178], 00:12:05.497 | 99.00th=[ 210], 99.50th=[ 265], 99.90th=[ 3261], 99.95th=[ 3785], 00:12:05.497 | 99.99th=[ 7439] 00:12:05.497 bw ( KiB/s): min=12288, max=12288, per=31.56%, avg=12288.00, stdev= 0.00, samples=1 00:12:05.497 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:12:05.497 lat (usec) : 4=0.18%, 10=0.10%, 20=0.02%, 50=0.02%, 100=1.48% 00:12:05.497 lat (usec) : 250=80.65%, 500=17.37%, 750=0.04%, 1000=0.04% 00:12:05.497 lat (msec) : 2=0.02%, 4=0.06%, 10=0.02% 00:12:05.497 cpu : usr=1.30%, sys=7.20%, ctx=5059, majf=0, minf=13 00:12:05.497 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:05.497 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:05.497 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:05.497 issued rwts: total=2448,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:05.497 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:05.497 job1: (groupid=0, jobs=1): err= 0: pid=66804: Tue Nov 26 20:40:00 2024 00:12:05.497 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:12:05.497 slat (nsec): min=6617, max=40104, avg=12030.21, stdev=3599.89 00:12:05.497 clat (usec): min=156, max=3955, avg=288.73, stdev=128.28 00:12:05.497 lat (usec): min=169, max=3973, avg=300.76, stdev=128.78 00:12:05.497 clat percentiles (usec): 00:12:05.497 | 1.00th=[ 204], 5.00th=[ 217], 10.00th=[ 225], 20.00th=[ 237], 00:12:05.497 | 30.00th=[ 247], 40.00th=[ 258], 50.00th=[ 269], 60.00th=[ 293], 00:12:05.497 | 70.00th=[ 314], 80.00th=[ 326], 90.00th=[ 347], 95.00th=[ 363], 00:12:05.497 | 99.00th=[ 498], 99.50th=[ 824], 99.90th=[ 1942], 99.95th=[ 2638], 00:12:05.497 | 99.99th=[ 3949] 00:12:05.497 write: IOPS=2060, BW=8244KiB/s (8442kB/s)(8252KiB/1001msec); 0 zone resets 00:12:05.497 slat (usec): min=8, max=132, avg=16.63, stdev= 5.82 00:12:05.497 clat (usec): min=105, max=338, avg=166.66, stdev=34.66 00:12:05.497 lat (usec): min=119, max=470, avg=183.29, stdev=36.46 00:12:05.497 clat percentiles (usec): 00:12:05.497 | 1.00th=[ 115], 5.00th=[ 122], 10.00th=[ 127], 20.00th=[ 135], 00:12:05.497 | 30.00th=[ 141], 40.00th=[ 151], 50.00th=[ 161], 60.00th=[ 174], 00:12:05.497 | 70.00th=[ 186], 80.00th=[ 200], 90.00th=[ 217], 95.00th=[ 227], 00:12:05.497 | 99.00th=[ 247], 99.50th=[ 258], 99.90th=[ 297], 99.95th=[ 314], 00:12:05.497 | 99.99th=[ 338] 00:12:05.497 bw ( KiB/s): min= 8192, max= 8192, per=21.04%, avg=8192.00, stdev= 0.00, samples=1 00:12:05.497 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:12:05.497 lat (usec) : 
250=66.43%, 500=33.08%, 750=0.19%, 1000=0.15% 00:12:05.497 lat (msec) : 2=0.10%, 4=0.05% 00:12:05.497 cpu : usr=1.90%, sys=4.80%, ctx=4114, majf=0, minf=15 00:12:05.497 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:05.497 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:05.497 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:05.497 issued rwts: total=2048,2063,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:05.497 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:05.497 job2: (groupid=0, jobs=1): err= 0: pid=66805: Tue Nov 26 20:40:00 2024 00:12:05.497 read: IOPS=2148, BW=8595KiB/s (8802kB/s)(8604KiB/1001msec) 00:12:05.497 slat (nsec): min=6720, max=48251, avg=11190.66, stdev=3307.97 00:12:05.497 clat (usec): min=143, max=4038, avg=255.93, stdev=107.47 00:12:05.497 lat (usec): min=151, max=4073, avg=267.12, stdev=108.35 00:12:05.497 clat percentiles (usec): 00:12:05.497 | 1.00th=[ 153], 5.00th=[ 159], 10.00th=[ 165], 20.00th=[ 182], 00:12:05.497 | 30.00th=[ 223], 40.00th=[ 237], 50.00th=[ 251], 60.00th=[ 269], 00:12:05.497 | 70.00th=[ 293], 80.00th=[ 314], 90.00th=[ 334], 95.00th=[ 347], 00:12:05.497 | 99.00th=[ 379], 99.50th=[ 457], 99.90th=[ 930], 99.95th=[ 1467], 00:12:05.497 | 99.99th=[ 4047] 00:12:05.497 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:12:05.497 slat (usec): min=8, max=150, avg=17.55, stdev= 7.17 00:12:05.497 clat (usec): min=64, max=390, avg=146.41, stdev=30.90 00:12:05.497 lat (usec): min=94, max=424, avg=163.95, stdev=33.85 00:12:05.497 clat percentiles (usec): 00:12:05.497 | 1.00th=[ 99], 5.00th=[ 109], 10.00th=[ 114], 20.00th=[ 121], 00:12:05.497 | 30.00th=[ 130], 40.00th=[ 137], 50.00th=[ 143], 60.00th=[ 149], 00:12:05.497 | 70.00th=[ 157], 80.00th=[ 165], 90.00th=[ 184], 95.00th=[ 200], 00:12:05.497 | 99.00th=[ 249], 99.50th=[ 285], 99.90th=[ 359], 99.95th=[ 375], 00:12:05.497 | 99.99th=[ 392] 00:12:05.497 bw ( KiB/s): min=12288, max=12288, per=31.56%, avg=12288.00, stdev= 0.00, samples=1 00:12:05.497 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:12:05.497 lat (usec) : 100=0.62%, 250=75.48%, 500=23.75%, 750=0.06%, 1000=0.04% 00:12:05.497 lat (msec) : 2=0.02%, 10=0.02% 00:12:05.497 cpu : usr=1.20%, sys=6.40%, ctx=4717, majf=0, minf=5 00:12:05.497 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:05.497 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:05.497 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:05.497 issued rwts: total=2151,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:05.497 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:05.497 job3: (groupid=0, jobs=1): err= 0: pid=66806: Tue Nov 26 20:40:00 2024 00:12:05.497 read: IOPS=2114, BW=8460KiB/s (8663kB/s)(8468KiB/1001msec) 00:12:05.498 slat (nsec): min=6590, max=56147, avg=11374.03, stdev=3866.35 00:12:05.498 clat (usec): min=144, max=2611, avg=245.13, stdev=101.16 00:12:05.498 lat (usec): min=153, max=2625, avg=256.50, stdev=101.39 00:12:05.498 clat percentiles (usec): 00:12:05.498 | 1.00th=[ 157], 5.00th=[ 165], 10.00th=[ 172], 20.00th=[ 186], 00:12:05.498 | 30.00th=[ 208], 40.00th=[ 225], 50.00th=[ 235], 60.00th=[ 249], 00:12:05.498 | 70.00th=[ 265], 80.00th=[ 293], 90.00th=[ 314], 95.00th=[ 330], 00:12:05.498 | 99.00th=[ 400], 99.50th=[ 611], 99.90th=[ 1532], 99.95th=[ 1942], 00:12:05.498 | 99.99th=[ 2606] 00:12:05.498 write: IOPS=2557, 
BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:12:05.498 slat (nsec): min=9013, max=58111, avg=16834.97, stdev=5454.26 00:12:05.498 clat (usec): min=104, max=271, avg=159.30, stdev=29.25 00:12:05.498 lat (usec): min=118, max=290, avg=176.13, stdev=31.12 00:12:05.498 clat percentiles (usec): 00:12:05.498 | 1.00th=[ 117], 5.00th=[ 125], 10.00th=[ 129], 20.00th=[ 135], 00:12:05.498 | 30.00th=[ 139], 40.00th=[ 147], 50.00th=[ 153], 60.00th=[ 159], 00:12:05.498 | 70.00th=[ 169], 80.00th=[ 186], 90.00th=[ 206], 95.00th=[ 217], 00:12:05.498 | 99.00th=[ 235], 99.50th=[ 245], 99.90th=[ 265], 99.95th=[ 269], 00:12:05.498 | 99.99th=[ 273] 00:12:05.498 bw ( KiB/s): min=11112, max=11112, per=28.54%, avg=11112.00, stdev= 0.00, samples=1 00:12:05.498 iops : min= 2778, max= 2778, avg=2778.00, stdev= 0.00, samples=1 00:12:05.498 lat (usec) : 250=81.98%, 500=17.73%, 750=0.11%, 1000=0.06% 00:12:05.498 lat (msec) : 2=0.11%, 4=0.02% 00:12:05.498 cpu : usr=1.40%, sys=6.00%, ctx=4679, majf=0, minf=13 00:12:05.498 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:05.498 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:05.498 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:05.498 issued rwts: total=2117,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:05.498 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:05.498 00:12:05.498 Run status group 0 (all jobs): 00:12:05.498 READ: bw=34.2MiB/s (35.9MB/s), 8184KiB/s-9782KiB/s (8380kB/s-10.0MB/s), io=34.2MiB (35.9MB), run=1001-1001msec 00:12:05.498 WRITE: bw=38.0MiB/s (39.9MB/s), 8244KiB/s-9.99MiB/s (8442kB/s-10.5MB/s), io=38.1MiB (39.9MB), run=1001-1001msec 00:12:05.498 00:12:05.498 Disk stats (read/write): 00:12:05.498 nvme0n1: ios=2098/2470, merge=0/0, ticks=469/333, in_queue=802, util=87.96% 00:12:05.498 nvme0n2: ios=1649/2048, merge=0/0, ticks=461/317, in_queue=778, util=88.57% 00:12:05.498 nvme0n3: ios=1995/2048, merge=0/0, ticks=535/292, in_queue=827, util=89.59% 00:12:05.498 nvme0n4: ios=1927/2048, merge=0/0, ticks=464/326, in_queue=790, util=89.52% 00:12:05.498 20:40:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:12:05.498 [global] 00:12:05.498 thread=1 00:12:05.498 invalidate=1 00:12:05.498 rw=write 00:12:05.498 time_based=1 00:12:05.498 runtime=1 00:12:05.498 ioengine=libaio 00:12:05.498 direct=1 00:12:05.498 bs=4096 00:12:05.498 iodepth=128 00:12:05.498 norandommap=0 00:12:05.498 numjobs=1 00:12:05.498 00:12:05.498 verify_dump=1 00:12:05.498 verify_backlog=512 00:12:05.498 verify_state_save=0 00:12:05.498 do_verify=1 00:12:05.498 verify=crc32c-intel 00:12:05.498 [job0] 00:12:05.498 filename=/dev/nvme0n1 00:12:05.498 [job1] 00:12:05.498 filename=/dev/nvme0n2 00:12:05.498 [job2] 00:12:05.498 filename=/dev/nvme0n3 00:12:05.498 [job3] 00:12:05.498 filename=/dev/nvme0n4 00:12:05.498 Could not set queue depth (nvme0n1) 00:12:05.498 Could not set queue depth (nvme0n2) 00:12:05.498 Could not set queue depth (nvme0n3) 00:12:05.498 Could not set queue depth (nvme0n4) 00:12:05.498 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:05.498 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:05.498 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:05.498 job3: (g=0): 
rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:05.498 fio-3.35 00:12:05.498 Starting 4 threads 00:12:06.873 00:12:06.873 job0: (groupid=0, jobs=1): err= 0: pid=66861: Tue Nov 26 20:40:01 2024 00:12:06.873 read: IOPS=2150, BW=8602KiB/s (8808kB/s)(8636KiB/1004msec) 00:12:06.873 slat (usec): min=4, max=7797, avg=174.40, stdev=729.80 00:12:06.873 clat (usec): min=686, max=36443, avg=21068.69, stdev=4576.76 00:12:06.873 lat (usec): min=4549, max=36467, avg=21243.09, stdev=4623.24 00:12:06.873 clat percentiles (usec): 00:12:06.873 | 1.00th=[ 4948], 5.00th=[15533], 10.00th=[16581], 20.00th=[17433], 00:12:06.873 | 30.00th=[17957], 40.00th=[19792], 50.00th=[21627], 60.00th=[22676], 00:12:06.873 | 70.00th=[24249], 80.00th=[24773], 90.00th=[25297], 95.00th=[27919], 00:12:06.873 | 99.00th=[31065], 99.50th=[33817], 99.90th=[35914], 99.95th=[35914], 00:12:06.873 | 99.99th=[36439] 00:12:06.873 write: IOPS=2549, BW=9.96MiB/s (10.4MB/s)(10.0MiB/1004msec); 0 zone resets 00:12:06.873 slat (usec): min=10, max=5470, avg=234.79, stdev=746.93 00:12:06.873 clat (usec): min=14236, max=51434, avg=31747.35, stdev=9146.23 00:12:06.873 lat (usec): min=14256, max=51483, avg=31982.14, stdev=9197.21 00:12:06.873 clat percentiles (usec): 00:12:06.873 | 1.00th=[14353], 5.00th=[17433], 10.00th=[17695], 20.00th=[23725], 00:12:06.873 | 30.00th=[28443], 40.00th=[30540], 50.00th=[31065], 60.00th=[32637], 00:12:06.873 | 70.00th=[36439], 80.00th=[40109], 90.00th=[44827], 95.00th=[47449], 00:12:06.873 | 99.00th=[50070], 99.50th=[51119], 99.90th=[51643], 99.95th=[51643], 00:12:06.873 | 99.99th=[51643] 00:12:06.873 bw ( KiB/s): min= 9384, max=10960, per=18.92%, avg=10172.00, stdev=1114.40, samples=2 00:12:06.873 iops : min= 2346, max= 2740, avg=2543.00, stdev=278.60, samples=2 00:12:06.873 lat (usec) : 750=0.02% 00:12:06.873 lat (msec) : 10=1.00%, 20=27.55%, 50=70.84%, 100=0.59% 00:12:06.873 cpu : usr=1.79%, sys=8.67%, ctx=368, majf=0, minf=1 00:12:06.873 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:12:06.873 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:06.873 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:06.873 issued rwts: total=2159,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:06.873 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:06.873 job1: (groupid=0, jobs=1): err= 0: pid=66862: Tue Nov 26 20:40:01 2024 00:12:06.873 read: IOPS=3062, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1003msec) 00:12:06.873 slat (usec): min=10, max=7279, avg=164.07, stdev=833.55 00:12:06.873 clat (usec): min=12568, max=30807, avg=20338.61, stdev=3617.47 00:12:06.873 lat (usec): min=14129, max=30829, avg=20502.68, stdev=3574.39 00:12:06.873 clat percentiles (usec): 00:12:06.873 | 1.00th=[14222], 5.00th=[15008], 10.00th=[16909], 20.00th=[17695], 00:12:06.873 | 30.00th=[18744], 40.00th=[19530], 50.00th=[19792], 60.00th=[20055], 00:12:06.873 | 70.00th=[20579], 80.00th=[21890], 90.00th=[27132], 95.00th=[28967], 00:12:06.873 | 99.00th=[30540], 99.50th=[30540], 99.90th=[30802], 99.95th=[30802], 00:12:06.873 | 99.99th=[30802] 00:12:06.873 write: IOPS=3531, BW=13.8MiB/s (14.5MB/s)(13.8MiB/1003msec); 0 zone resets 00:12:06.873 slat (usec): min=12, max=6389, avg=129.96, stdev=610.44 00:12:06.873 clat (usec): min=2662, max=27993, avg=17975.85, stdev=3925.53 00:12:06.873 lat (usec): min=2683, max=28022, avg=18105.81, stdev=3878.09 00:12:06.873 clat percentiles (usec): 00:12:06.874 | 1.00th=[ 7242], 
5.00th=[13435], 10.00th=[13829], 20.00th=[14222], 00:12:06.874 | 30.00th=[15008], 40.00th=[17171], 50.00th=[18482], 60.00th=[19268], 00:12:06.874 | 70.00th=[19792], 80.00th=[20317], 90.00th=[22152], 95.00th=[25035], 00:12:06.874 | 99.00th=[27919], 99.50th=[27919], 99.90th=[27919], 99.95th=[27919], 00:12:06.874 | 99.99th=[27919] 00:12:06.874 bw ( KiB/s): min=13632, max=13635, per=25.35%, avg=13633.50, stdev= 2.12, samples=2 00:12:06.874 iops : min= 3408, max= 3408, avg=3408.00, stdev= 0.00, samples=2 00:12:06.874 lat (msec) : 4=0.33%, 10=0.50%, 20=65.41%, 50=33.76% 00:12:06.874 cpu : usr=4.09%, sys=10.48%, ctx=208, majf=0, minf=4 00:12:06.874 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:12:06.874 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:06.874 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:06.874 issued rwts: total=3072,3542,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:06.874 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:06.874 job2: (groupid=0, jobs=1): err= 0: pid=66863: Tue Nov 26 20:40:01 2024 00:12:06.874 read: IOPS=3062, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1003msec) 00:12:06.874 slat (usec): min=4, max=6817, avg=161.27, stdev=811.74 00:12:06.874 clat (usec): min=14862, max=27902, avg=21791.34, stdev=2528.61 00:12:06.874 lat (usec): min=18940, max=27940, avg=21952.61, stdev=2406.53 00:12:06.874 clat percentiles (usec): 00:12:06.874 | 1.00th=[15926], 5.00th=[19006], 10.00th=[19268], 20.00th=[19792], 00:12:06.874 | 30.00th=[20317], 40.00th=[20579], 50.00th=[21103], 60.00th=[21890], 00:12:06.874 | 70.00th=[22676], 80.00th=[23987], 90.00th=[26346], 95.00th=[26870], 00:12:06.874 | 99.00th=[27657], 99.50th=[27657], 99.90th=[27919], 99.95th=[27919], 00:12:06.874 | 99.99th=[27919] 00:12:06.874 write: IOPS=3303, BW=12.9MiB/s (13.5MB/s)(12.9MiB/1003msec); 0 zone resets 00:12:06.874 slat (usec): min=12, max=7639, avg=142.05, stdev=675.69 00:12:06.874 clat (usec): min=2694, max=26539, avg=18008.59, stdev=2977.98 00:12:06.874 lat (usec): min=2716, max=26566, avg=18150.64, stdev=2926.28 00:12:06.874 clat percentiles (usec): 00:12:06.874 | 1.00th=[ 7046], 5.00th=[14353], 10.00th=[14615], 20.00th=[15401], 00:12:06.874 | 30.00th=[16909], 40.00th=[17433], 50.00th=[18744], 60.00th=[19268], 00:12:06.874 | 70.00th=[19530], 80.00th=[19792], 90.00th=[20841], 95.00th=[22152], 00:12:06.874 | 99.00th=[25035], 99.50th=[26346], 99.90th=[26608], 99.95th=[26608], 00:12:06.874 | 99.99th=[26608] 00:12:06.874 bw ( KiB/s): min=12312, max=13152, per=23.68%, avg=12732.00, stdev=593.97, samples=2 00:12:06.874 iops : min= 3078, max= 3288, avg=3183.00, stdev=148.49, samples=2 00:12:06.874 lat (msec) : 4=0.27%, 10=0.50%, 20=52.33%, 50=46.91% 00:12:06.874 cpu : usr=3.19%, sys=10.78%, ctx=224, majf=0, minf=5 00:12:06.874 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:12:06.874 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:06.874 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:06.874 issued rwts: total=3072,3313,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:06.874 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:06.874 job3: (groupid=0, jobs=1): err= 0: pid=66864: Tue Nov 26 20:40:01 2024 00:12:06.874 read: IOPS=3853, BW=15.1MiB/s (15.8MB/s)(15.1MiB/1005msec) 00:12:06.874 slat (usec): min=4, max=6841, avg=131.87, stdev=696.60 00:12:06.874 clat (usec): min=393, max=25220, avg=16687.78, stdev=3297.25 00:12:06.874 lat 
(usec): min=4838, max=25230, avg=16819.65, stdev=3248.61 00:12:06.874 clat percentiles (usec): 00:12:06.874 | 1.00th=[10290], 5.00th=[12649], 10.00th=[14091], 20.00th=[15008], 00:12:06.874 | 30.00th=[15270], 40.00th=[15401], 50.00th=[15664], 60.00th=[15926], 00:12:06.874 | 70.00th=[16450], 80.00th=[19792], 90.00th=[22676], 95.00th=[23725], 00:12:06.874 | 99.00th=[25035], 99.50th=[25035], 99.90th=[25297], 99.95th=[25297], 00:12:06.874 | 99.99th=[25297] 00:12:06.874 write: IOPS=4075, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1005msec); 0 zone resets 00:12:06.874 slat (usec): min=9, max=5794, avg=114.91, stdev=581.20 00:12:06.874 clat (usec): min=9225, max=23957, avg=15116.66, stdev=3412.39 00:12:06.874 lat (usec): min=11709, max=23976, avg=15231.57, stdev=3382.36 00:12:06.874 clat percentiles (usec): 00:12:06.874 | 1.00th=[10028], 5.00th=[11863], 10.00th=[11994], 20.00th=[12387], 00:12:06.874 | 30.00th=[12649], 40.00th=[13042], 50.00th=[13960], 60.00th=[15401], 00:12:06.874 | 70.00th=[16319], 80.00th=[16909], 90.00th=[20841], 95.00th=[23200], 00:12:06.874 | 99.00th=[23987], 99.50th=[23987], 99.90th=[23987], 99.95th=[23987], 00:12:06.874 | 99.99th=[23987] 00:12:06.874 bw ( KiB/s): min=16384, max=16384, per=30.47%, avg=16384.00, stdev= 0.00, samples=2 00:12:06.874 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=2 00:12:06.874 lat (usec) : 500=0.01% 00:12:06.874 lat (msec) : 10=0.89%, 20=84.10%, 50=15.00% 00:12:06.874 cpu : usr=2.59%, sys=6.77%, ctx=254, majf=0, minf=5 00:12:06.874 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:12:06.874 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:06.874 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:06.874 issued rwts: total=3873,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:06.874 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:06.874 00:12:06.874 Run status group 0 (all jobs): 00:12:06.874 READ: bw=47.3MiB/s (49.6MB/s), 8602KiB/s-15.1MiB/s (8808kB/s-15.8MB/s), io=47.6MiB (49.9MB), run=1003-1005msec 00:12:06.874 WRITE: bw=52.5MiB/s (55.1MB/s), 9.96MiB/s-15.9MiB/s (10.4MB/s-16.7MB/s), io=52.8MiB (55.3MB), run=1003-1005msec 00:12:06.874 00:12:06.874 Disk stats (read/write): 00:12:06.874 nvme0n1: ios=2098/2063, merge=0/0, ticks=14447/19808, in_queue=34255, util=87.26% 00:12:06.874 nvme0n2: ios=2588/3008, merge=0/0, ticks=12718/11642, in_queue=24360, util=87.07% 00:12:06.874 nvme0n3: ios=2560/2784, merge=0/0, ticks=12722/11502, in_queue=24224, util=88.48% 00:12:06.874 nvme0n4: ios=3072/3520, merge=0/0, ticks=12938/12442, in_queue=25380, util=89.67% 00:12:06.874 20:40:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:12:06.874 [global] 00:12:06.874 thread=1 00:12:06.874 invalidate=1 00:12:06.874 rw=randwrite 00:12:06.874 time_based=1 00:12:06.874 runtime=1 00:12:06.874 ioengine=libaio 00:12:06.874 direct=1 00:12:06.874 bs=4096 00:12:06.874 iodepth=128 00:12:06.874 norandommap=0 00:12:06.874 numjobs=1 00:12:06.874 00:12:06.874 verify_dump=1 00:12:06.874 verify_backlog=512 00:12:06.874 verify_state_save=0 00:12:06.874 do_verify=1 00:12:06.874 verify=crc32c-intel 00:12:06.874 [job0] 00:12:06.874 filename=/dev/nvme0n1 00:12:06.874 [job1] 00:12:06.874 filename=/dev/nvme0n2 00:12:06.874 [job2] 00:12:06.874 filename=/dev/nvme0n3 00:12:06.874 [job3] 00:12:06.874 filename=/dev/nvme0n4 00:12:06.874 Could not set queue depth (nvme0n1) 
00:12:06.874 Could not set queue depth (nvme0n2) 00:12:06.874 Could not set queue depth (nvme0n3) 00:12:06.874 Could not set queue depth (nvme0n4) 00:12:06.874 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:06.874 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:06.874 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:06.874 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:06.874 fio-3.35 00:12:06.874 Starting 4 threads 00:12:08.253 00:12:08.253 job0: (groupid=0, jobs=1): err= 0: pid=66928: Tue Nov 26 20:40:03 2024 00:12:08.253 read: IOPS=4891, BW=19.1MiB/s (20.0MB/s)(19.2MiB/1005msec) 00:12:08.253 slat (usec): min=5, max=7939, avg=96.92, stdev=403.50 00:12:08.253 clat (usec): min=3251, max=30768, avg=12766.13, stdev=3858.03 00:12:08.253 lat (usec): min=5725, max=30996, avg=12863.05, stdev=3890.76 00:12:08.253 clat percentiles (usec): 00:12:08.253 | 1.00th=[ 9372], 5.00th=[10421], 10.00th=[10814], 20.00th=[11076], 00:12:08.253 | 30.00th=[11338], 40.00th=[11600], 50.00th=[11731], 60.00th=[11863], 00:12:08.253 | 70.00th=[12125], 80.00th=[12518], 90.00th=[14091], 95.00th=[24249], 00:12:08.253 | 99.00th=[28181], 99.50th=[29492], 99.90th=[30802], 99.95th=[30802], 00:12:08.253 | 99.99th=[30802] 00:12:08.253 write: IOPS=5094, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1005msec); 0 zone resets 00:12:08.253 slat (usec): min=10, max=3225, avg=93.37, stdev=378.22 00:12:08.253 clat (usec): min=8693, max=30023, avg=12533.43, stdev=3902.30 00:12:08.253 lat (usec): min=8714, max=30700, avg=12626.81, stdev=3940.48 00:12:08.253 clat percentiles (usec): 00:12:08.253 | 1.00th=[ 9503], 5.00th=[10290], 10.00th=[10552], 20.00th=[10814], 00:12:08.253 | 30.00th=[10945], 40.00th=[11076], 50.00th=[11207], 60.00th=[11338], 00:12:08.253 | 70.00th=[11731], 80.00th=[11994], 90.00th=[15533], 95.00th=[23200], 00:12:08.253 | 99.00th=[27919], 99.50th=[28705], 99.90th=[29754], 99.95th=[30016], 00:12:08.253 | 99.99th=[30016] 00:12:08.253 bw ( KiB/s): min=18328, max=22677, per=31.53%, avg=20502.50, stdev=3075.21, samples=2 00:12:08.253 iops : min= 4582, max= 5669, avg=5125.50, stdev=768.63, samples=2 00:12:08.253 lat (msec) : 4=0.01%, 10=2.58%, 20=88.69%, 50=8.72% 00:12:08.253 cpu : usr=4.18%, sys=14.64%, ctx=609, majf=0, minf=3 00:12:08.253 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:12:08.253 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:08.253 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:08.253 issued rwts: total=4916,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:08.253 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:08.253 job1: (groupid=0, jobs=1): err= 0: pid=66929: Tue Nov 26 20:40:03 2024 00:12:08.253 read: IOPS=3065, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1002msec) 00:12:08.253 slat (usec): min=9, max=6080, avg=151.91, stdev=807.26 00:12:08.253 clat (usec): min=8612, max=26039, avg=20077.25, stdev=6268.81 00:12:08.253 lat (usec): min=10658, max=26062, avg=20229.16, stdev=6266.08 00:12:08.253 clat percentiles (usec): 00:12:08.253 | 1.00th=[ 9765], 5.00th=[10945], 10.00th=[11207], 20.00th=[11338], 00:12:08.253 | 30.00th=[11994], 40.00th=[23987], 50.00th=[24249], 60.00th=[24249], 00:12:08.253 | 70.00th=[24511], 80.00th=[25035], 90.00th=[25297], 
95.00th=[25560], 00:12:08.253 | 99.00th=[25822], 99.50th=[25822], 99.90th=[26084], 99.95th=[26084], 00:12:08.253 | 99.99th=[26084] 00:12:08.253 write: IOPS=3322, BW=13.0MiB/s (13.6MB/s)(13.0MiB/1002msec); 0 zone resets 00:12:08.253 slat (usec): min=13, max=5857, avg=150.97, stdev=737.96 00:12:08.253 clat (usec): min=216, max=25014, avg=19344.04, stdev=6086.30 00:12:08.253 lat (usec): min=2061, max=25067, avg=19495.00, stdev=6083.91 00:12:08.253 clat percentiles (usec): 00:12:08.253 | 1.00th=[ 4948], 5.00th=[10421], 10.00th=[10683], 20.00th=[11207], 00:12:08.253 | 30.00th=[13304], 40.00th=[22676], 50.00th=[22938], 60.00th=[23200], 00:12:08.253 | 70.00th=[23462], 80.00th=[23987], 90.00th=[24249], 95.00th=[24511], 00:12:08.253 | 99.00th=[24773], 99.50th=[24773], 99.90th=[25035], 99.95th=[25035], 00:12:08.253 | 99.99th=[25035] 00:12:08.253 bw ( KiB/s): min=12192, max=12192, per=18.75%, avg=12192.00, stdev= 0.00, samples=1 00:12:08.253 iops : min= 3048, max= 3048, avg=3048.00, stdev= 0.00, samples=1 00:12:08.253 lat (usec) : 250=0.02% 00:12:08.253 lat (msec) : 4=0.50%, 10=1.94%, 20=32.42%, 50=65.13% 00:12:08.253 cpu : usr=4.20%, sys=8.89%, ctx=202, majf=0, minf=12 00:12:08.253 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:12:08.253 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:08.253 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:08.253 issued rwts: total=3072,3329,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:08.253 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:08.253 job2: (groupid=0, jobs=1): err= 0: pid=66930: Tue Nov 26 20:40:03 2024 00:12:08.253 read: IOPS=4725, BW=18.5MiB/s (19.4MB/s)(18.5MiB/1002msec) 00:12:08.253 slat (usec): min=6, max=4974, avg=98.19, stdev=392.76 00:12:08.253 clat (usec): min=559, max=17466, avg=12989.89, stdev=1382.20 00:12:08.253 lat (usec): min=2246, max=17482, avg=13088.08, stdev=1410.23 00:12:08.253 clat percentiles (usec): 00:12:08.253 | 1.00th=[ 6390], 5.00th=[11469], 10.00th=[12256], 20.00th=[12518], 00:12:08.253 | 30.00th=[12649], 40.00th=[12780], 50.00th=[12911], 60.00th=[13042], 00:12:08.253 | 70.00th=[13173], 80.00th=[13566], 90.00th=[14615], 95.00th=[15139], 00:12:08.253 | 99.00th=[15926], 99.50th=[16319], 99.90th=[17433], 99.95th=[17433], 00:12:08.253 | 99.99th=[17433] 00:12:08.253 write: IOPS=5109, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1002msec); 0 zone resets 00:12:08.253 slat (usec): min=11, max=3605, avg=95.83, stdev=439.32 00:12:08.253 clat (usec): min=9810, max=16835, avg=12691.18, stdev=927.40 00:12:08.253 lat (usec): min=9828, max=16851, avg=12787.01, stdev=1011.83 00:12:08.253 clat percentiles (usec): 00:12:08.253 | 1.00th=[10552], 5.00th=[11469], 10.00th=[11863], 20.00th=[12125], 00:12:08.253 | 30.00th=[12256], 40.00th=[12387], 50.00th=[12518], 60.00th=[12649], 00:12:08.253 | 70.00th=[12911], 80.00th=[13042], 90.00th=[13566], 95.00th=[14746], 00:12:08.253 | 99.00th=[15926], 99.50th=[16319], 99.90th=[16909], 99.95th=[16909], 00:12:08.253 | 99.99th=[16909] 00:12:08.253 bw ( KiB/s): min=20472, max=20521, per=31.52%, avg=20496.50, stdev=34.65, samples=2 00:12:08.253 iops : min= 5118, max= 5130, avg=5124.00, stdev= 8.49, samples=2 00:12:08.253 lat (usec) : 750=0.01% 00:12:08.253 lat (msec) : 4=0.17%, 10=0.70%, 20=99.12% 00:12:08.253 cpu : usr=4.60%, sys=13.99%, ctx=404, majf=0, minf=3 00:12:08.253 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:12:08.253 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:12:08.253 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:08.253 issued rwts: total=4735,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:08.253 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:08.253 job3: (groupid=0, jobs=1): err= 0: pid=66931: Tue Nov 26 20:40:03 2024 00:12:08.253 read: IOPS=2549, BW=9.96MiB/s (10.4MB/s)(10.0MiB/1004msec) 00:12:08.253 slat (usec): min=5, max=5978, avg=192.26, stdev=874.36 00:12:08.253 clat (usec): min=17329, max=32888, avg=24843.22, stdev=1977.78 00:12:08.253 lat (usec): min=17346, max=32941, avg=25035.48, stdev=1806.61 00:12:08.253 clat percentiles (usec): 00:12:08.253 | 1.00th=[19006], 5.00th=[21627], 10.00th=[23725], 20.00th=[23987], 00:12:08.253 | 30.00th=[24249], 40.00th=[24249], 50.00th=[24773], 60.00th=[25035], 00:12:08.253 | 70.00th=[25297], 80.00th=[25560], 90.00th=[26870], 95.00th=[28967], 00:12:08.253 | 99.00th=[31065], 99.50th=[31327], 99.90th=[31851], 99.95th=[32900], 00:12:08.253 | 99.99th=[32900] 00:12:08.253 write: IOPS=2758, BW=10.8MiB/s (11.3MB/s)(10.8MiB/1004msec); 0 zone resets 00:12:08.253 slat (usec): min=10, max=5581, avg=172.81, stdev=791.71 00:12:08.253 clat (usec): min=3037, max=29642, avg=22701.36, stdev=2920.34 00:12:08.253 lat (usec): min=5779, max=29919, avg=22874.17, stdev=2824.20 00:12:08.253 clat percentiles (usec): 00:12:08.253 | 1.00th=[ 6652], 5.00th=[17171], 10.00th=[19530], 20.00th=[22676], 00:12:08.253 | 30.00th=[22938], 40.00th=[22938], 50.00th=[23462], 60.00th=[23462], 00:12:08.253 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24511], 95.00th=[24773], 00:12:08.253 | 99.00th=[28181], 99.50th=[28705], 99.90th=[29230], 99.95th=[29230], 00:12:08.253 | 99.99th=[29754] 00:12:08.253 bw ( KiB/s): min= 9024, max=12136, per=16.27%, avg=10580.00, stdev=2200.52, samples=2 00:12:08.253 iops : min= 2256, max= 3034, avg=2645.00, stdev=550.13, samples=2 00:12:08.253 lat (msec) : 4=0.02%, 10=0.75%, 20=6.68%, 50=92.55% 00:12:08.253 cpu : usr=3.59%, sys=8.67%, ctx=402, majf=0, minf=5 00:12:08.253 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:12:08.253 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:08.253 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:08.253 issued rwts: total=2560,2770,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:08.253 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:08.253 00:12:08.253 Run status group 0 (all jobs): 00:12:08.254 READ: bw=59.4MiB/s (62.3MB/s), 9.96MiB/s-19.1MiB/s (10.4MB/s-20.0MB/s), io=59.7MiB (62.6MB), run=1002-1005msec 00:12:08.254 WRITE: bw=63.5MiB/s (66.6MB/s), 10.8MiB/s-20.0MiB/s (11.3MB/s-20.9MB/s), io=63.8MiB (66.9MB), run=1002-1005msec 00:12:08.254 00:12:08.254 Disk stats (read/write): 00:12:08.254 nvme0n1: ios=4654/4610, merge=0/0, ticks=17532/14122, in_queue=31654, util=87.46% 00:12:08.254 nvme0n2: ios=2161/2560, merge=0/0, ticks=12071/12610, in_queue=24681, util=88.22% 00:12:08.254 nvme0n3: ios=4096/4228, merge=0/0, ticks=17087/15041, in_queue=32128, util=88.73% 00:12:08.254 nvme0n4: ios=2048/2485, merge=0/0, ticks=11750/12936, in_queue=24686, util=89.47% 00:12:08.254 20:40:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:12:08.254 20:40:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=66945 00:12:08.254 20:40:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 
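(Editor's note: the trace above launches fio in the background with "-t read -r 10" and then, while that workload is still running, deletes the bdevs backing the subsystem to exercise hotplug handling; fio is expected to fail with "Operation not supported", which the script later reports as "nvmf hotplug test: fio failed as expected". A minimal sketch of that sequence, assembled only from commands visible in this log — the $rpc shorthand for /home/vagrant/spdk_repo/spdk/scripts/rpc.py is shorthand introduced here, not from the captured output — might look like:

    # start a 10-second read workload against the exported namespaces, in the background
    /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 &
    fio_pid=$!
    sleep 3

    # hot-remove the bdevs underneath the subsystem while fio is still issuing I/O
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # shorthand for readability
    $rpc bdev_raid_delete concat0
    $rpc bdev_raid_delete raid0
    $rpc bdev_malloc_delete Malloc0                    # likewise Malloc1 .. Malloc6

    # fio should exit non-zero once its devices disappear
    wait $fio_pid || echo 'nvmf hotplug test: fio failed as expected'

This is a readability reconstruction, not part of the captured output; the actual control flow is in the target/fio.sh script referenced in the trace.)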
00:12:08.254 20:40:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:12:08.254 [global] 00:12:08.254 thread=1 00:12:08.254 invalidate=1 00:12:08.254 rw=read 00:12:08.254 time_based=1 00:12:08.254 runtime=10 00:12:08.254 ioengine=libaio 00:12:08.254 direct=1 00:12:08.254 bs=4096 00:12:08.254 iodepth=1 00:12:08.254 norandommap=1 00:12:08.254 numjobs=1 00:12:08.254 00:12:08.254 [job0] 00:12:08.254 filename=/dev/nvme0n1 00:12:08.254 [job1] 00:12:08.254 filename=/dev/nvme0n2 00:12:08.254 [job2] 00:12:08.254 filename=/dev/nvme0n3 00:12:08.254 [job3] 00:12:08.254 filename=/dev/nvme0n4 00:12:08.254 Could not set queue depth (nvme0n1) 00:12:08.254 Could not set queue depth (nvme0n2) 00:12:08.254 Could not set queue depth (nvme0n3) 00:12:08.254 Could not set queue depth (nvme0n4) 00:12:08.512 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:08.512 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:08.512 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:08.512 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:08.512 fio-3.35 00:12:08.512 Starting 4 threads 00:12:11.798 20:40:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:12:11.798 fio: pid=66988, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:12:11.798 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=39718912, buflen=4096 00:12:11.798 20:40:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:12:11.798 fio: pid=66987, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:12:11.798 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=60329984, buflen=4096 00:12:11.798 20:40:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:11.798 20:40:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:12:12.056 fio: pid=66985, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:12:12.056 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=48615424, buflen=4096 00:12:12.056 20:40:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:12.056 20:40:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:12:12.315 fio: pid=66986, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:12:12.315 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=54669312, buflen=4096 00:12:12.315 00:12:12.315 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=66985: Tue Nov 26 20:40:07 2024 00:12:12.315 read: IOPS=3460, BW=13.5MiB/s (14.2MB/s)(46.4MiB/3430msec) 00:12:12.315 slat (usec): min=5, max=13362, avg=14.44, stdev=179.47 00:12:12.315 clat (usec): min=111, max=3534, avg=273.37, stdev=90.64 00:12:12.315 lat (usec): min=124, max=13597, 
avg=287.81, stdev=201.40 00:12:12.315 clat percentiles (usec): 00:12:12.315 | 1.00th=[ 141], 5.00th=[ 196], 10.00th=[ 217], 20.00th=[ 233], 00:12:12.315 | 30.00th=[ 243], 40.00th=[ 251], 50.00th=[ 260], 60.00th=[ 273], 00:12:12.315 | 70.00th=[ 302], 80.00th=[ 322], 90.00th=[ 338], 95.00th=[ 351], 00:12:12.315 | 99.00th=[ 392], 99.50th=[ 537], 99.90th=[ 1270], 99.95th=[ 2278], 00:12:12.315 | 99.99th=[ 3458] 00:12:12.315 bw ( KiB/s): min=11888, max=15640, per=25.59%, avg=13709.33, stdev=1754.63, samples=6 00:12:12.315 iops : min= 2972, max= 3910, avg=3427.33, stdev=438.66, samples=6 00:12:12.315 lat (usec) : 250=39.50%, 500=59.92%, 750=0.34%, 1000=0.08% 00:12:12.315 lat (msec) : 2=0.09%, 4=0.06% 00:12:12.315 cpu : usr=0.96%, sys=3.59%, ctx=11875, majf=0, minf=1 00:12:12.315 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:12.315 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:12.315 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:12.315 issued rwts: total=11870,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:12.315 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:12.315 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=66986: Tue Nov 26 20:40:07 2024 00:12:12.315 read: IOPS=3601, BW=14.1MiB/s (14.8MB/s)(52.1MiB/3706msec) 00:12:12.315 slat (usec): min=7, max=13291, avg=18.00, stdev=209.89 00:12:12.315 clat (usec): min=7, max=4048, avg=258.34, stdev=105.82 00:12:12.315 lat (usec): min=111, max=13566, avg=276.34, stdev=235.67 00:12:12.315 clat percentiles (usec): 00:12:12.315 | 1.00th=[ 117], 5.00th=[ 128], 10.00th=[ 141], 20.00th=[ 215], 00:12:12.315 | 30.00th=[ 233], 40.00th=[ 243], 50.00th=[ 253], 60.00th=[ 269], 00:12:12.315 | 70.00th=[ 293], 80.00th=[ 318], 90.00th=[ 334], 95.00th=[ 351], 00:12:12.315 | 99.00th=[ 502], 99.50th=[ 545], 99.90th=[ 1029], 99.95th=[ 2606], 00:12:12.315 | 99.99th=[ 4015] 00:12:12.315 bw ( KiB/s): min=11368, max=17104, per=26.09%, avg=13979.43, stdev=2229.37, samples=7 00:12:12.315 iops : min= 2842, max= 4276, avg=3494.86, stdev=557.34, samples=7 00:12:12.315 lat (usec) : 10=0.01%, 250=46.78%, 500=52.12%, 750=0.91%, 1000=0.07% 00:12:12.315 lat (msec) : 2=0.03%, 4=0.07%, 10=0.01% 00:12:12.315 cpu : usr=1.16%, sys=4.37%, ctx=13356, majf=0, minf=1 00:12:12.315 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:12.315 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:12.315 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:12.315 issued rwts: total=13348,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:12.315 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:12.315 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=66987: Tue Nov 26 20:40:07 2024 00:12:12.315 read: IOPS=4611, BW=18.0MiB/s (18.9MB/s)(57.5MiB/3194msec) 00:12:12.315 slat (usec): min=7, max=8879, avg=11.02, stdev=95.54 00:12:12.315 clat (usec): min=116, max=2958, avg=204.78, stdev=74.53 00:12:12.315 lat (usec): min=124, max=9485, avg=215.80, stdev=123.74 00:12:12.315 clat percentiles (usec): 00:12:12.315 | 1.00th=[ 139], 5.00th=[ 145], 10.00th=[ 149], 20.00th=[ 155], 00:12:12.315 | 30.00th=[ 159], 40.00th=[ 165], 50.00th=[ 176], 60.00th=[ 235], 00:12:12.315 | 70.00th=[ 249], 80.00th=[ 260], 90.00th=[ 273], 95.00th=[ 281], 00:12:12.315 | 99.00th=[ 310], 99.50th=[ 326], 99.90th=[ 586], 99.95th=[ 1663], 
00:12:12.315 | 99.99th=[ 2638] 00:12:12.315 bw ( KiB/s): min=14688, max=23520, per=35.15%, avg=18836.00, stdev=4316.53, samples=6 00:12:12.315 iops : min= 3672, max= 5880, avg=4709.00, stdev=1079.13, samples=6 00:12:12.315 lat (usec) : 250=70.61%, 500=29.27%, 750=0.03%, 1000=0.01% 00:12:12.315 lat (msec) : 2=0.02%, 4=0.04% 00:12:12.315 cpu : usr=0.97%, sys=4.38%, ctx=14735, majf=0, minf=2 00:12:12.315 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:12.315 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:12.315 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:12.315 issued rwts: total=14730,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:12.315 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:12.315 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=66988: Tue Nov 26 20:40:07 2024 00:12:12.315 read: IOPS=3306, BW=12.9MiB/s (13.5MB/s)(37.9MiB/2933msec) 00:12:12.315 slat (usec): min=6, max=101, avg=12.09, stdev= 4.73 00:12:12.315 clat (usec): min=136, max=7790, avg=289.02, stdev=166.20 00:12:12.315 lat (usec): min=148, max=7803, avg=301.11, stdev=166.41 00:12:12.315 clat percentiles (usec): 00:12:12.315 | 1.00th=[ 217], 5.00th=[ 231], 10.00th=[ 239], 20.00th=[ 247], 00:12:12.315 | 30.00th=[ 255], 40.00th=[ 262], 50.00th=[ 273], 60.00th=[ 289], 00:12:12.315 | 70.00th=[ 310], 80.00th=[ 322], 90.00th=[ 338], 95.00th=[ 351], 00:12:12.315 | 99.00th=[ 379], 99.50th=[ 445], 99.90th=[ 3163], 99.95th=[ 4178], 00:12:12.315 | 99.99th=[ 7767] 00:12:12.315 bw ( KiB/s): min=11624, max=15168, per=25.21%, avg=13505.60, stdev=1642.59, samples=5 00:12:12.315 iops : min= 2906, max= 3792, avg=3376.40, stdev=410.65, samples=5 00:12:12.315 lat (usec) : 250=24.33%, 500=75.24%, 750=0.22%, 1000=0.01% 00:12:12.315 lat (msec) : 2=0.05%, 4=0.06%, 10=0.07% 00:12:12.315 cpu : usr=0.78%, sys=4.06%, ctx=9698, majf=0, minf=2 00:12:12.315 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:12.315 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:12.315 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:12.315 issued rwts: total=9698,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:12.315 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:12.315 00:12:12.315 Run status group 0 (all jobs): 00:12:12.315 READ: bw=52.3MiB/s (54.9MB/s), 12.9MiB/s-18.0MiB/s (13.5MB/s-18.9MB/s), io=194MiB (203MB), run=2933-3706msec 00:12:12.315 00:12:12.315 Disk stats (read/write): 00:12:12.315 nvme0n1: ios=11569/0, merge=0/0, ticks=3145/0, in_queue=3145, util=94.99% 00:12:12.315 nvme0n2: ios=12664/0, merge=0/0, ticks=3353/0, in_queue=3353, util=95.02% 00:12:12.315 nvme0n3: ios=14437/0, merge=0/0, ticks=2957/0, in_queue=2957, util=96.26% 00:12:12.315 nvme0n4: ios=9473/0, merge=0/0, ticks=2657/0, in_queue=2657, util=96.04% 00:12:12.315 20:40:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:12.315 20:40:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:12:12.574 20:40:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:12.574 20:40:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:12:12.833 20:40:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:12.833 20:40:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:12:13.400 20:40:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:13.400 20:40:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:12:13.659 20:40:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:13.659 20:40:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:12:13.917 20:40:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:12:13.917 20:40:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 66945 00:12:13.917 20:40:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:12:13.917 20:40:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:13.917 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:13.917 20:40:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:13.917 20:40:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:12:13.917 20:40:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:13.917 20:40:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:13.917 20:40:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:13.917 20:40:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:13.917 nvmf hotplug test: fio failed as expected 00:12:13.917 20:40:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:12:13.918 20:40:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:12:13.918 20:40:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:12:13.918 20:40:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:14.183 20:40:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:12:14.183 20:40:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:12:14.183 20:40:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:12:14.183 20:40:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:12:14.183 20:40:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:12:14.183 20:40:09 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:14.183 20:40:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:12:14.183 20:40:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:14.183 20:40:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:12:14.183 20:40:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:14.183 20:40:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:14.183 rmmod nvme_tcp 00:12:14.183 rmmod nvme_fabrics 00:12:14.183 rmmod nvme_keyring 00:12:14.183 20:40:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:14.183 20:40:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:12:14.183 20:40:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:12:14.183 20:40:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 66558 ']' 00:12:14.183 20:40:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 66558 00:12:14.183 20:40:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 66558 ']' 00:12:14.183 20:40:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 66558 00:12:14.183 20:40:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:12:14.183 20:40:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:14.183 20:40:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66558 00:12:14.183 killing process with pid 66558 00:12:14.183 20:40:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:14.183 20:40:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:14.183 20:40:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66558' 00:12:14.183 20:40:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 66558 00:12:14.183 20:40:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 66558 00:12:14.477 20:40:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:14.477 20:40:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:14.477 20:40:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:14.477 20:40:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:12:14.477 20:40:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:12:14.477 20:40:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:14.477 20:40:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:12:14.477 20:40:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:14.477 20:40:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:12:14.477 20:40:09 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:12:14.736 20:40:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:12:14.736 20:40:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:12:14.736 20:40:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:12:14.736 20:40:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:12:14.736 20:40:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:12:14.736 20:40:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:12:14.736 20:40:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:12:14.736 20:40:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:12:14.736 20:40:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:12:14.736 20:40:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:12:14.736 20:40:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:14.736 20:40:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:14.736 20:40:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:12:14.736 20:40:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:14.736 20:40:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:14.736 20:40:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:14.736 20:40:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@300 -- # return 0 00:12:14.736 ************************************ 00:12:14.736 END TEST nvmf_fio_target 00:12:14.736 ************************************ 00:12:14.736 00:12:14.736 real 0m20.194s 00:12:14.736 user 1m14.876s 00:12:14.736 sys 0m11.202s 00:12:14.736 20:40:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:14.736 20:40:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:12:14.736 20:40:09 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:12:14.736 20:40:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:14.736 20:40:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:14.736 20:40:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:14.996 ************************************ 00:12:14.996 START TEST nvmf_bdevio 00:12:14.996 ************************************ 00:12:14.996 20:40:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:12:14.996 * Looking for test storage... 
00:12:14.996 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:14.996 20:40:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:14.996 20:40:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:14.996 20:40:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:12:14.996 20:40:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:14.996 20:40:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:14.996 20:40:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:14.996 20:40:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:14.996 20:40:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:12:14.996 20:40:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:12:14.996 20:40:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:12:14.996 20:40:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:12:14.996 20:40:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:12:14.996 20:40:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:12:14.996 20:40:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:12:14.996 20:40:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:14.996 20:40:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:12:14.996 20:40:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:12:14.996 20:40:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:14.996 20:40:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:14.996 20:40:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:12:14.996 20:40:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:12:14.996 20:40:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:14.996 20:40:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:12:14.996 20:40:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:12:14.996 20:40:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:12:14.996 20:40:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:12:14.996 20:40:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:14.996 20:40:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:12:14.996 20:40:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:12:14.996 20:40:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:14.996 20:40:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:14.996 20:40:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:12:14.996 20:40:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:14.996 20:40:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:14.996 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:14.996 --rc genhtml_branch_coverage=1 00:12:14.996 --rc genhtml_function_coverage=1 00:12:14.996 --rc genhtml_legend=1 00:12:14.996 --rc geninfo_all_blocks=1 00:12:14.996 --rc geninfo_unexecuted_blocks=1 00:12:14.996 00:12:14.996 ' 00:12:14.996 20:40:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:14.996 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:14.996 --rc genhtml_branch_coverage=1 00:12:14.996 --rc genhtml_function_coverage=1 00:12:14.996 --rc genhtml_legend=1 00:12:14.996 --rc geninfo_all_blocks=1 00:12:14.996 --rc geninfo_unexecuted_blocks=1 00:12:14.996 00:12:14.996 ' 00:12:14.996 20:40:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:14.996 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:14.996 --rc genhtml_branch_coverage=1 00:12:14.996 --rc genhtml_function_coverage=1 00:12:14.996 --rc genhtml_legend=1 00:12:14.996 --rc geninfo_all_blocks=1 00:12:14.996 --rc geninfo_unexecuted_blocks=1 00:12:14.996 00:12:14.996 ' 00:12:14.996 20:40:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:14.996 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:14.996 --rc genhtml_branch_coverage=1 00:12:14.996 --rc genhtml_function_coverage=1 00:12:14.996 --rc genhtml_legend=1 00:12:14.996 --rc geninfo_all_blocks=1 00:12:14.996 --rc geninfo_unexecuted_blocks=1 00:12:14.996 00:12:14.996 ' 00:12:14.996 20:40:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:14.996 20:40:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:12:14.996 20:40:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:12:14.996 20:40:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:14.996 20:40:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:14.996 20:40:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:14.996 20:40:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:14.996 20:40:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:14.996 20:40:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:14.996 20:40:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:14.996 20:40:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:14.996 20:40:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:14.996 20:40:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b 00:12:14.996 20:40:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=5b7a0101-ee75-44bd-b64f-b6a56d193f2b 00:12:14.996 20:40:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:14.996 20:40:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:14.996 20:40:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:14.996 20:40:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:14.996 20:40:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:14.997 20:40:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:12:14.997 20:40:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:14.997 20:40:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:14.997 20:40:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:14.997 20:40:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:14.997 20:40:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:14.997 20:40:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:14.997 20:40:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:12:14.997 20:40:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:14.997 20:40:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:12:14.997 20:40:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:14.997 20:40:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:14.997 20:40:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:14.997 20:40:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:14.997 20:40:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:14.997 20:40:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:14.997 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:14.997 20:40:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:14.997 20:40:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:14.997 20:40:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:14.997 20:40:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:14.997 20:40:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:14.997 20:40:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 
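nvmftestinit below calls nvmf_veth_init, which wires the initiator and the namespaced target together over veth pairs and a bridge before anything listens on 10.0.0.3. Stripped of the cleanup steps and the second interface pair (nvmf_init_if2/nvmf_tgt_if2 with 10.0.0.2/10.0.0.4), the plumbing it performs is roughly:

    # Rough shape of the topology nvmf_veth_init builds below (interface names
    # and addresses as used in this run; second veth pair omitted for brevity).
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    for l in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$l" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.3    # connectivity check, as seen in the log below
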
00:12:14.997 20:40:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:14.997 20:40:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:14.997 20:40:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:14.997 20:40:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:14.997 20:40:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:14.997 20:40:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:14.997 20:40:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:14.997 20:40:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:14.997 20:40:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:12:14.997 20:40:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:12:14.997 20:40:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:12:14.997 20:40:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:12:14.997 20:40:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:12:14.997 20:40:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@460 -- # nvmf_veth_init 00:12:14.997 20:40:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:14.997 20:40:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:12:14.997 20:40:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:12:14.997 20:40:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:12:14.997 20:40:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:14.997 20:40:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:12:14.997 20:40:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:14.997 20:40:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:12:14.997 20:40:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:14.997 20:40:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:12:14.997 20:40:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:14.997 20:40:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:14.997 20:40:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:14.997 20:40:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:14.997 20:40:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:14.997 20:40:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:14.997 20:40:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio 
-- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:12:15.255 Cannot find device "nvmf_init_br" 00:12:15.255 20:40:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@162 -- # true 00:12:15.255 20:40:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:12:15.255 Cannot find device "nvmf_init_br2" 00:12:15.255 20:40:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # true 00:12:15.255 20:40:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:12:15.255 Cannot find device "nvmf_tgt_br" 00:12:15.255 20:40:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@164 -- # true 00:12:15.255 20:40:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:12:15.255 Cannot find device "nvmf_tgt_br2" 00:12:15.255 20:40:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@165 -- # true 00:12:15.255 20:40:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:12:15.255 Cannot find device "nvmf_init_br" 00:12:15.255 20:40:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # true 00:12:15.255 20:40:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:12:15.255 Cannot find device "nvmf_init_br2" 00:12:15.255 20:40:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@167 -- # true 00:12:15.255 20:40:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:12:15.255 Cannot find device "nvmf_tgt_br" 00:12:15.255 20:40:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@168 -- # true 00:12:15.255 20:40:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:12:15.255 Cannot find device "nvmf_tgt_br2" 00:12:15.255 20:40:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # true 00:12:15.255 20:40:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:12:15.255 Cannot find device "nvmf_br" 00:12:15.255 20:40:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # true 00:12:15.255 20:40:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:12:15.255 Cannot find device "nvmf_init_if" 00:12:15.255 20:40:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # true 00:12:15.255 20:40:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:12:15.255 Cannot find device "nvmf_init_if2" 00:12:15.255 20:40:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # true 00:12:15.255 20:40:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:15.255 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:15.255 20:40:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@173 -- # true 00:12:15.255 20:40:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:15.255 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:15.255 20:40:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # true 00:12:15.255 20:40:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:12:15.255 
20:40:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:15.255 20:40:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:12:15.255 20:40:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:15.255 20:40:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:15.255 20:40:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:15.255 20:40:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:15.255 20:40:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:15.255 20:40:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:12:15.255 20:40:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:12:15.255 20:40:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:12:15.255 20:40:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:12:15.255 20:40:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:12:15.255 20:40:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:12:15.255 20:40:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:12:15.255 20:40:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:12:15.255 20:40:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:12:15.255 20:40:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:15.255 20:40:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:15.512 20:40:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:15.512 20:40:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:12:15.512 20:40:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:12:15.512 20:40:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:12:15.512 20:40:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:12:15.512 20:40:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:15.512 20:40:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:15.512 20:40:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:15.512 20:40:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 
4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:12:15.512 20:40:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:12:15.512 20:40:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:12:15.512 20:40:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:15.512 20:40:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:12:15.512 20:40:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:12:15.512 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:15.512 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.158 ms 00:12:15.512 00:12:15.512 --- 10.0.0.3 ping statistics --- 00:12:15.512 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:15.512 rtt min/avg/max/mdev = 0.158/0.158/0.158/0.000 ms 00:12:15.512 20:40:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:12:15.512 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:12:15.512 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.049 ms 00:12:15.512 00:12:15.512 --- 10.0.0.4 ping statistics --- 00:12:15.512 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:15.512 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:12:15.512 20:40:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:15.512 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:15.512 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:12:15.512 00:12:15.512 --- 10.0.0.1 ping statistics --- 00:12:15.513 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:15.513 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:12:15.513 20:40:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:12:15.513 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:15.513 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.107 ms 00:12:15.513 00:12:15.513 --- 10.0.0.2 ping statistics --- 00:12:15.513 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:15.513 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:12:15.513 20:40:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:15.513 20:40:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@461 -- # return 0 00:12:15.513 20:40:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:15.513 20:40:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:15.513 20:40:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:15.513 20:40:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:15.513 20:40:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:15.513 20:40:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:15.513 20:40:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:15.513 20:40:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:12:15.513 20:40:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:15.513 20:40:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:15.513 20:40:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:15.513 20:40:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=67317 00:12:15.513 20:40:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 67317 00:12:15.513 20:40:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:12:15.513 20:40:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 67317 ']' 00:12:15.513 20:40:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:15.513 20:40:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:15.513 20:40:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:15.513 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:15.513 20:40:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:15.513 20:40:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:15.513 [2024-11-26 20:40:10.473004] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
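The nvmfappstart call above launches nvmf_tgt inside the target namespace and then waits for its RPC socket to answer. A simplified equivalent of what it does in this run (core mask and paths taken from the command line above; the polling loop is our stand-in for the autotest waitforlisten helper):

    # Simplified sketch of nvmfappstart -m 0x78 for this run; waitforlisten is
    # approximated by polling the RPC socket until rpc_get_methods answers.
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 &
    nvmfpid=$!
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    until "$rpc" -t 1 rpc_get_methods > /dev/null 2>&1; do
        sleep 0.5
    done
    echo "nvmf_tgt is up (pid $nvmfpid)"
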
00:12:15.513 [2024-11-26 20:40:10.473102] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:15.770 [2024-11-26 20:40:10.628226] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:15.770 [2024-11-26 20:40:10.699789] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:15.770 [2024-11-26 20:40:10.699852] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:15.770 [2024-11-26 20:40:10.699865] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:15.770 [2024-11-26 20:40:10.699878] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:15.770 [2024-11-26 20:40:10.699888] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:15.770 [2024-11-26 20:40:10.701566] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:12:15.770 [2024-11-26 20:40:10.701644] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:12:15.770 [2024-11-26 20:40:10.701815] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:12:15.770 [2024-11-26 20:40:10.701917] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:16.029 [2024-11-26 20:40:10.785670] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:16.029 20:40:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:16.029 20:40:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:12:16.029 20:40:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:16.029 20:40:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:16.029 20:40:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:16.029 20:40:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:16.029 20:40:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:16.029 20:40:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.029 20:40:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:16.029 [2024-11-26 20:40:10.923481] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:16.029 20:40:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.029 20:40:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:16.029 20:40:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.029 20:40:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:16.029 Malloc0 00:12:16.029 20:40:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.029 20:40:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 00:12:16.029 20:40:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.029 20:40:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:16.029 20:40:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.029 20:40:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:16.029 20:40:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.029 20:40:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:16.029 20:40:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.029 20:40:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:12:16.029 20:40:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.029 20:40:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:16.029 [2024-11-26 20:40:10.992093] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:12:16.029 20:40:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.029 20:40:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:12:16.029 20:40:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:12:16.029 20:40:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:12:16.029 20:40:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:12:16.029 20:40:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:12:16.029 20:40:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:12:16.029 { 00:12:16.029 "params": { 00:12:16.029 "name": "Nvme$subsystem", 00:12:16.029 "trtype": "$TEST_TRANSPORT", 00:12:16.029 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:16.029 "adrfam": "ipv4", 00:12:16.029 "trsvcid": "$NVMF_PORT", 00:12:16.029 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:16.029 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:16.029 "hdgst": ${hdgst:-false}, 00:12:16.029 "ddgst": ${ddgst:-false} 00:12:16.029 }, 00:12:16.029 "method": "bdev_nvme_attach_controller" 00:12:16.029 } 00:12:16.029 EOF 00:12:16.029 )") 00:12:16.029 20:40:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:12:16.029 20:40:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
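At this point the target exposes a 64 MiB Malloc bdev through nqn.2016-06.io.spdk:cnode1, listening on 10.0.0.3:4420. Outside the rpc_cmd wrapper used above, the same provisioning is a five-call rpc.py sequence (arguments exactly as issued in this run):

    # The rpc_cmd calls above, written as plain rpc.py invocations.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$rpc" nvmf_create_transport -t tcp -o -u 8192
    "$rpc" bdev_malloc_create 64 512 -b Malloc0
    "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
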
00:12:16.029 20:40:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:12:16.029 20:40:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:12:16.029 "params": { 00:12:16.029 "name": "Nvme1", 00:12:16.029 "trtype": "tcp", 00:12:16.029 "traddr": "10.0.0.3", 00:12:16.029 "adrfam": "ipv4", 00:12:16.029 "trsvcid": "4420", 00:12:16.029 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:16.029 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:16.029 "hdgst": false, 00:12:16.029 "ddgst": false 00:12:16.029 }, 00:12:16.029 "method": "bdev_nvme_attach_controller" 00:12:16.029 }' 00:12:16.289 [2024-11-26 20:40:11.061595] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:12:16.289 [2024-11-26 20:40:11.061703] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67351 ] 00:12:16.289 [2024-11-26 20:40:11.225150] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:16.547 [2024-11-26 20:40:11.290114] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:16.547 [2024-11-26 20:40:11.289992] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:16.547 [2024-11-26 20:40:11.290113] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:16.547 [2024-11-26 20:40:11.347077] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:16.547 I/O targets: 00:12:16.547 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:12:16.547 00:12:16.547 00:12:16.547 CUnit - A unit testing framework for C - Version 2.1-3 00:12:16.547 http://cunit.sourceforge.net/ 00:12:16.547 00:12:16.547 00:12:16.547 Suite: bdevio tests on: Nvme1n1 00:12:16.547 Test: blockdev write read block ...passed 00:12:16.548 Test: blockdev write zeroes read block ...passed 00:12:16.548 Test: blockdev write zeroes read no split ...passed 00:12:16.548 Test: blockdev write zeroes read split ...passed 00:12:16.548 Test: blockdev write zeroes read split partial ...passed 00:12:16.548 Test: blockdev reset ...[2024-11-26 20:40:11.521852] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:12:16.548 [2024-11-26 20:40:11.521970] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24c9190 (9): Bad file descriptor 00:12:16.548 [2024-11-26 20:40:11.534534] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:12:16.548 passed 00:12:16.548 Test: blockdev write read 8 blocks ...passed 00:12:16.548 Test: blockdev write read size > 128k ...passed 00:12:16.548 Test: blockdev write read invalid size ...passed 00:12:16.548 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:16.548 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:16.548 Test: blockdev write read max offset ...passed 00:12:16.548 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:16.548 Test: blockdev writev readv 8 blocks ...passed 00:12:16.548 Test: blockdev writev readv 30 x 1block ...passed 00:12:16.548 Test: blockdev writev readv block ...passed 00:12:16.806 Test: blockdev writev readv size > 128k ...passed 00:12:16.806 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:16.806 Test: blockdev comparev and writev ...[2024-11-26 20:40:11.541827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:16.806 [2024-11-26 20:40:11.541873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:12:16.806 [2024-11-26 20:40:11.541892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:16.806 [2024-11-26 20:40:11.541904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:12:16.806 [2024-11-26 20:40:11.542287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:16.806 [2024-11-26 20:40:11.542312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:12:16.806 [2024-11-26 20:40:11.542330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:16.806 [2024-11-26 20:40:11.542341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:12:16.806 [2024-11-26 20:40:11.542678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:16.806 [2024-11-26 20:40:11.542706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:12:16.806 [2024-11-26 20:40:11.542725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:16.806 [2024-11-26 20:40:11.542737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:12:16.806 [2024-11-26 20:40:11.543077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:16.806 [2024-11-26 20:40:11.543107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:12:16.806 [2024-11-26 20:40:11.543128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:16.807 [2024-11-26 20:40:11.543142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:12:16.807 passed 00:12:16.807 Test: blockdev nvme passthru rw ...passed 00:12:16.807 Test: blockdev nvme passthru vendor specific ...[2024-11-26 20:40:11.544030] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:16.807 [2024-11-26 20:40:11.544070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:12:16.807 [2024-11-26 20:40:11.544207] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:16.807 [2024-11-26 20:40:11.544227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:12:16.807 [2024-11-26 20:40:11.544334] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:16.807 [2024-11-26 20:40:11.544357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:12:16.807 [2024-11-26 20:40:11.544454] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:16.807 [2024-11-26 20:40:11.544492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:12:16.807 passed 00:12:16.807 Test: blockdev nvme admin passthru ...passed 00:12:16.807 Test: blockdev copy ...passed 00:12:16.807 00:12:16.807 Run Summary: Type Total Ran Passed Failed Inactive 00:12:16.807 suites 1 1 n/a 0 0 00:12:16.807 tests 23 23 23 0 0 00:12:16.807 asserts 152 152 152 0 n/a 00:12:16.807 00:12:16.807 Elapsed time = 0.157 seconds 00:12:16.807 20:40:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:16.807 20:40:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.807 20:40:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:16.807 20:40:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.807 20:40:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:12:16.807 20:40:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:12:16.807 20:40:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:16.807 20:40:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:12:17.068 20:40:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:17.068 20:40:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:12:17.068 20:40:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:17.068 20:40:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:17.068 rmmod nvme_tcp 00:12:17.068 rmmod nvme_fabrics 00:12:17.068 rmmod nvme_keyring 00:12:17.068 20:40:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:17.068 20:40:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:12:17.068 20:40:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 
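For completeness: the bdevio run above took its controller definition from gen_nvmf_target_json via /dev/fd/62. To repeat it by hand, the bdev_nvme_attach_controller entry printed earlier can be saved to a file and passed to --json; the surrounding subsystems/config wrapper below is our assumption about the generated layout, since the log only prints the inner fragment:

    # Hypothetical stand-alone reproduction of the bdevio invocation above; the
    # subsystems/config wrapper is assumed, only the inner entry is taken
    # verbatim from the generated config printed earlier in this log.
    cat > /tmp/bdevio_nvme.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme1",
                "trtype": "tcp",
                "traddr": "10.0.0.3",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }
    EOF
    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /tmp/bdevio_nvme.json
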
00:12:17.068 20:40:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 67317 ']' 00:12:17.068 20:40:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 67317 00:12:17.068 20:40:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 67317 ']' 00:12:17.068 20:40:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 67317 00:12:17.068 20:40:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:12:17.068 20:40:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:17.068 20:40:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67317 00:12:17.068 20:40:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:12:17.068 20:40:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:12:17.068 killing process with pid 67317 00:12:17.068 20:40:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67317' 00:12:17.068 20:40:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 67317 00:12:17.068 20:40:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 67317 00:12:17.326 20:40:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:17.326 20:40:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:17.326 20:40:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:17.326 20:40:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:12:17.326 20:40:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:17.326 20:40:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:12:17.326 20:40:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:12:17.326 20:40:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:17.326 20:40:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:12:17.326 20:40:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:12:17.326 20:40:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:12:17.326 20:40:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:12:17.326 20:40:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:12:17.326 20:40:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:12:17.326 20:40:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:12:17.326 20:40:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:12:17.326 20:40:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:12:17.326 20:40:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:12:17.585 20:40:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@242 -- # 
ip link delete nvmf_init_if 00:12:17.585 20:40:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:12:17.585 20:40:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:17.585 20:40:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:17.585 20:40:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@246 -- # remove_spdk_ns 00:12:17.585 20:40:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:17.585 20:40:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:17.585 20:40:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:17.585 20:40:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@300 -- # return 0 00:12:17.585 00:12:17.585 real 0m2.750s 00:12:17.585 user 0m7.257s 00:12:17.585 sys 0m1.050s 00:12:17.585 20:40:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:17.585 20:40:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:17.585 ************************************ 00:12:17.585 END TEST nvmf_bdevio 00:12:17.585 ************************************ 00:12:17.585 20:40:12 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:12:17.585 ************************************ 00:12:17.585 END TEST nvmf_target_core 00:12:17.585 ************************************ 00:12:17.585 00:12:17.585 real 2m41.169s 00:12:17.585 user 6m52.160s 00:12:17.585 sys 1m2.864s 00:12:17.585 20:40:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:17.586 20:40:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:17.586 20:40:12 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:12:17.586 20:40:12 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:17.586 20:40:12 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:17.586 20:40:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:17.845 ************************************ 00:12:17.845 START TEST nvmf_target_extra 00:12:17.845 ************************************ 00:12:17.845 20:40:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:12:17.845 * Looking for test storage... 
00:12:17.845 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:12:17.845 20:40:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:17.845 20:40:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lcov --version 00:12:17.845 20:40:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:17.845 20:40:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:17.845 20:40:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:17.845 20:40:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:17.845 20:40:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:17.845 20:40:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:12:17.845 20:40:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:12:17.845 20:40:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:12:17.845 20:40:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:12:17.845 20:40:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:12:17.845 20:40:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:12:17.845 20:40:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:12:17.845 20:40:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:17.845 20:40:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:12:17.845 20:40:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:12:17.845 20:40:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:17.845 20:40:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:17.845 20:40:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:12:17.845 20:40:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:12:17.845 20:40:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:17.845 20:40:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:12:17.845 20:40:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:12:17.845 20:40:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:12:17.845 20:40:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:12:17.845 20:40:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:17.845 20:40:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:12:17.845 20:40:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:12:17.845 20:40:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:17.845 20:40:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:17.845 20:40:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:12:17.845 20:40:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:17.845 20:40:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:17.845 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:17.845 --rc genhtml_branch_coverage=1 00:12:17.845 --rc genhtml_function_coverage=1 00:12:17.845 --rc genhtml_legend=1 00:12:17.845 --rc geninfo_all_blocks=1 00:12:17.845 --rc geninfo_unexecuted_blocks=1 00:12:17.845 00:12:17.845 ' 00:12:17.845 20:40:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:17.845 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:17.845 --rc genhtml_branch_coverage=1 00:12:17.845 --rc genhtml_function_coverage=1 00:12:17.845 --rc genhtml_legend=1 00:12:17.845 --rc geninfo_all_blocks=1 00:12:17.845 --rc geninfo_unexecuted_blocks=1 00:12:17.845 00:12:17.845 ' 00:12:17.845 20:40:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:17.845 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:17.845 --rc genhtml_branch_coverage=1 00:12:17.845 --rc genhtml_function_coverage=1 00:12:17.845 --rc genhtml_legend=1 00:12:17.845 --rc geninfo_all_blocks=1 00:12:17.845 --rc geninfo_unexecuted_blocks=1 00:12:17.845 00:12:17.845 ' 00:12:17.845 20:40:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:17.845 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:17.845 --rc genhtml_branch_coverage=1 00:12:17.845 --rc genhtml_function_coverage=1 00:12:17.845 --rc genhtml_legend=1 00:12:17.845 --rc geninfo_all_blocks=1 00:12:17.845 --rc geninfo_unexecuted_blocks=1 00:12:17.845 00:12:17.845 ' 00:12:17.845 20:40:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:17.845 20:40:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:12:17.845 20:40:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:17.845 20:40:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:17.845 20:40:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:17.845 20:40:12 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:17.845 20:40:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:17.845 20:40:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:17.845 20:40:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:17.845 20:40:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:17.845 20:40:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:17.845 20:40:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:17.845 20:40:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b 00:12:17.845 20:40:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=5b7a0101-ee75-44bd-b64f-b6a56d193f2b 00:12:17.845 20:40:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:17.845 20:40:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:17.845 20:40:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:17.845 20:40:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:17.845 20:40:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:17.845 20:40:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:12:17.845 20:40:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:17.845 20:40:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:17.845 20:40:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:17.845 20:40:12 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:17.845 20:40:12 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:17.845 20:40:12 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:17.845 20:40:12 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:12:17.845 20:40:12 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:17.845 20:40:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:12:17.845 20:40:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:17.845 20:40:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:17.845 20:40:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:17.845 20:40:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:17.845 20:40:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:17.845 20:40:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:17.845 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:17.845 20:40:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:17.845 20:40:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:17.845 20:40:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:17.845 20:40:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:12:17.845 20:40:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:12:17.845 20:40:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 1 -eq 0 ]] 00:12:17.846 20:40:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:12:17.846 20:40:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:17.846 20:40:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:17.846 20:40:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:17.846 ************************************ 00:12:17.846 START TEST nvmf_auth_target 00:12:17.846 ************************************ 00:12:17.846 20:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:12:18.107 * Looking for test storage... 
00:12:18.107 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:18.107 20:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:18.107 20:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lcov --version 00:12:18.107 20:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:18.107 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:18.107 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:18.107 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:18.107 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:18.107 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:12:18.107 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:12:18.107 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:12:18.107 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:12:18.107 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:12:18.107 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:12:18.107 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:12:18.107 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:18.107 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:12:18.107 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:12:18.107 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:18.107 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:18.107 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:12:18.107 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:12:18.107 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:18.107 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:12:18.107 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:12:18.107 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:12:18.107 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:12:18.107 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:18.107 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:12:18.107 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:12:18.107 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:18.107 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:18.107 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:12:18.107 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:18.107 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:18.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:18.107 --rc genhtml_branch_coverage=1 00:12:18.107 --rc genhtml_function_coverage=1 00:12:18.107 --rc genhtml_legend=1 00:12:18.107 --rc geninfo_all_blocks=1 00:12:18.107 --rc geninfo_unexecuted_blocks=1 00:12:18.107 00:12:18.107 ' 00:12:18.107 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:18.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:18.107 --rc genhtml_branch_coverage=1 00:12:18.107 --rc genhtml_function_coverage=1 00:12:18.107 --rc genhtml_legend=1 00:12:18.107 --rc geninfo_all_blocks=1 00:12:18.107 --rc geninfo_unexecuted_blocks=1 00:12:18.107 00:12:18.107 ' 00:12:18.107 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:18.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:18.107 --rc genhtml_branch_coverage=1 00:12:18.107 --rc genhtml_function_coverage=1 00:12:18.107 --rc genhtml_legend=1 00:12:18.107 --rc geninfo_all_blocks=1 00:12:18.107 --rc geninfo_unexecuted_blocks=1 00:12:18.107 00:12:18.107 ' 00:12:18.107 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:18.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:18.107 --rc genhtml_branch_coverage=1 00:12:18.107 --rc genhtml_function_coverage=1 00:12:18.107 --rc genhtml_legend=1 00:12:18.107 --rc geninfo_all_blocks=1 00:12:18.107 --rc geninfo_unexecuted_blocks=1 00:12:18.107 00:12:18.107 ' 00:12:18.107 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:18.107 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@7 -- # uname -s 00:12:18.107 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:18.107 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:18.107 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:18.107 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:18.107 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:18.107 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:18.107 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:18.107 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:18.107 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:18.107 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:18.107 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b 00:12:18.107 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b7a0101-ee75-44bd-b64f-b6a56d193f2b 00:12:18.107 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:18.107 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:18.107 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:18.107 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:18.107 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:18.107 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:12:18.107 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:18.107 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:18.107 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:18.107 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:18.107 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:18.107 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:18.107 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:12:18.107 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:18.107 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:12:18.108 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:18.108 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:18.108 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:18.108 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:18.108 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:18.108 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:18.108 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:18.108 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:18.108 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:18.108 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:18.108 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:12:18.108 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" 
"ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:12:18.108 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:12:18.108 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b 00:12:18.108 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:12:18.108 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:12:18.108 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:12:18.108 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:12:18.108 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:18.108 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:18.108 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:18.108 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:18.108 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:18.108 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:18.108 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:18.108 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:18.108 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:12:18.108 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:12:18.108 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:12:18.108 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:12:18.108 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:12:18.108 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:12:18.108 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:18.108 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:12:18.108 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:12:18.108 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:12:18.108 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:18.108 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:12:18.108 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:18.108 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:12:18.108 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:18.108 
20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:12:18.108 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:18.108 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:18.108 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:18.108 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:18.108 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:18.108 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:18.108 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:12:18.368 Cannot find device "nvmf_init_br" 00:12:18.368 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # true 00:12:18.368 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:12:18.368 Cannot find device "nvmf_init_br2" 00:12:18.368 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # true 00:12:18.368 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:12:18.368 Cannot find device "nvmf_tgt_br" 00:12:18.368 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@164 -- # true 00:12:18.368 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:12:18.368 Cannot find device "nvmf_tgt_br2" 00:12:18.368 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@165 -- # true 00:12:18.368 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:12:18.368 Cannot find device "nvmf_init_br" 00:12:18.368 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # true 00:12:18.368 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:12:18.368 Cannot find device "nvmf_init_br2" 00:12:18.368 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@167 -- # true 00:12:18.368 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:12:18.368 Cannot find device "nvmf_tgt_br" 00:12:18.368 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@168 -- # true 00:12:18.368 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:12:18.368 Cannot find device "nvmf_tgt_br2" 00:12:18.368 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # true 00:12:18.369 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:12:18.369 Cannot find device "nvmf_br" 00:12:18.369 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # true 00:12:18.369 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:12:18.369 Cannot find device "nvmf_init_if" 00:12:18.369 20:40:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # true 00:12:18.369 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:12:18.369 Cannot find device "nvmf_init_if2" 00:12:18.369 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # true 00:12:18.369 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:18.369 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:18.369 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@173 -- # true 00:12:18.369 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:18.369 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:18.369 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # true 00:12:18.369 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:12:18.369 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:18.369 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:12:18.369 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:18.369 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:18.369 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:18.629 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:18.629 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:18.629 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:12:18.629 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:12:18.629 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:12:18.629 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:12:18.629 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:12:18.629 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:12:18.629 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:12:18.629 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:12:18.629 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:12:18.629 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:18.629 20:40:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:18.629 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:18.629 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:12:18.629 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:12:18.629 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:12:18.629 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:12:18.629 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:18.629 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:18.629 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:18.629 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:12:18.629 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:12:18.629 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:12:18.629 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:18.629 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:12:18.629 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:12:18.629 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:18.629 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.104 ms 00:12:18.629 00:12:18.629 --- 10.0.0.3 ping statistics --- 00:12:18.629 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:18.629 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:12:18.629 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:12:18.629 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:12:18.629 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.074 ms 00:12:18.629 00:12:18.629 --- 10.0.0.4 ping statistics --- 00:12:18.629 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:18.629 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:12:18.629 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:18.629 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:18.629 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:12:18.629 00:12:18.629 --- 10.0.0.1 ping statistics --- 00:12:18.629 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:18.629 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:12:18.629 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:12:18.629 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:18.629 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.065 ms 00:12:18.629 00:12:18.629 --- 10.0.0.2 ping statistics --- 00:12:18.629 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:18.629 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:12:18.629 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:18.629 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@461 -- # return 0 00:12:18.629 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:18.629 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:18.629 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:18.629 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:18.629 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:18.629 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:18.629 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:18.889 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:12:18.889 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:18.889 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:18.889 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:18.889 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=67638 00:12:18.889 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 67638 00:12:18.889 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:12:18.889 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 67638 ']' 00:12:18.889 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:18.889 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:18.889 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
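[editor's note] With the target up inside the namespace (nvmfpid=67638) and the host-side spdk_tgt about to start, the entries that follow generate the DHCHAP secrets the auth test will register: for each key, random material is read from /dev/urandom, wrapped into a DHHC-1 secret by an inline Python helper, written to a mode-0600 temp file, and recorded in the keys[]/ckeys[] arrays. A minimal sketch of the raw-material step, reconstructed from the traced commands (the DHHC-1 wrapping itself is performed by the helper's Python one-liner and is deliberately not reproduced here):

# Sketch of gen_dhchap_key as traced below; digest name and hex length
# vary per call (null/sha256/sha384/sha512 with 32/48/64 hex characters).
gen_dhchap_key_sketch() {
    local digest=$1 len=$2                        # e.g. "null" 48
    # half as many random bytes as hex characters requested
    local key; key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)
    local file; file=$(mktemp -t "spdk.key-$digest.XXX")
    # the real helper pipes $key through an inline Python snippet here to
    # emit the formatted "DHHC-1:..." secret; this sketch stores the raw hex
    echo "$key" > "$file"
    chmod 0600 "$file"                            # keep the secret private
    echo "$file"
}

In the run above this yields /tmp/spdk.key-null.Akr, /tmp/spdk.key-sha512.Bho, /tmp/spdk.key-sha256.xDg, and so on, stored pairwise in the keys[i]/ckeys[i] arrays for the auth test cases that follow.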
00:12:18.889 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:18.889 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:19.147 20:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:19.147 20:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:12:19.147 20:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:19.147 20:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:19.147 20:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:19.405 20:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:19.405 20:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=67668 00:12:19.405 20:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:12:19.405 20:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:12:19.405 20:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:12:19.405 20:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:12:19.405 20:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:12:19.405 20:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:12:19.405 20:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null 00:12:19.405 20:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:12:19.405 20:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:12:19.405 20:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=01725e8e9ff20f873ed41fa38184e9c4d74129ac48677372 00:12:19.405 20:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:12:19.405 20:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.Akr 00:12:19.405 20:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 01725e8e9ff20f873ed41fa38184e9c4d74129ac48677372 0 00:12:19.405 20:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 01725e8e9ff20f873ed41fa38184e9c4d74129ac48677372 0 00:12:19.405 20:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:12:19.405 20:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:12:19.405 20:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=01725e8e9ff20f873ed41fa38184e9c4d74129ac48677372 00:12:19.406 20:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:12:19.406 20:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:12:19.406 20:40:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.Akr 00:12:19.406 20:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.Akr 00:12:19.406 20:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.Akr 00:12:19.406 20:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:12:19.406 20:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:12:19.406 20:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:12:19.406 20:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:12:19.406 20:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:12:19.406 20:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:12:19.406 20:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:12:19.406 20:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=2484c25935c16e694fa91b7c9e60f99f89765d334c12456cbc1dbe24f07e54de 00:12:19.406 20:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:12:19.406 20:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.Bho 00:12:19.406 20:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 2484c25935c16e694fa91b7c9e60f99f89765d334c12456cbc1dbe24f07e54de 3 00:12:19.406 20:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 2484c25935c16e694fa91b7c9e60f99f89765d334c12456cbc1dbe24f07e54de 3 00:12:19.406 20:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:12:19.406 20:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:12:19.406 20:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=2484c25935c16e694fa91b7c9e60f99f89765d334c12456cbc1dbe24f07e54de 00:12:19.406 20:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:12:19.406 20:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:12:19.406 20:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.Bho 00:12:19.406 20:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.Bho 00:12:19.406 20:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.Bho 00:12:19.406 20:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:12:19.406 20:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:12:19.406 20:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:12:19.406 20:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:12:19.406 20:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:12:19.406 20:40:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:12:19.406 20:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:12:19.406 20:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=9818107374e53c642ff904b67f8f4d31 00:12:19.406 20:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:12:19.406 20:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.xDg 00:12:19.406 20:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 9818107374e53c642ff904b67f8f4d31 1 00:12:19.406 20:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 9818107374e53c642ff904b67f8f4d31 1 00:12:19.406 20:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:12:19.406 20:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:12:19.406 20:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=9818107374e53c642ff904b67f8f4d31 00:12:19.406 20:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:12:19.406 20:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:12:19.406 20:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.xDg 00:12:19.406 20:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.xDg 00:12:19.406 20:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.xDg 00:12:19.406 20:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:12:19.406 20:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:12:19.406 20:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:12:19.406 20:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:12:19.406 20:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:12:19.406 20:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:12:19.406 20:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:12:19.664 20:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=b68a74e84b2f2c947612c99896f0b93335ffb0f66be1edb1 00:12:19.664 20:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:12:19.664 20:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.zYK 00:12:19.664 20:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key b68a74e84b2f2c947612c99896f0b93335ffb0f66be1edb1 2 00:12:19.664 20:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 b68a74e84b2f2c947612c99896f0b93335ffb0f66be1edb1 2 00:12:19.664 20:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:12:19.664 20:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@732 -- # prefix=DHHC-1 00:12:19.664 20:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=b68a74e84b2f2c947612c99896f0b93335ffb0f66be1edb1 00:12:19.664 20:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:12:19.664 20:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:12:19.664 20:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.zYK 00:12:19.664 20:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.zYK 00:12:19.664 20:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.zYK 00:12:19.664 20:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:12:19.664 20:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:12:19.664 20:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:12:19.664 20:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:12:19.664 20:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:12:19.664 20:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:12:19.665 20:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:12:19.665 20:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=5a42656131a2a57a73c6decb99c3e51afe2b36742882e048 00:12:19.665 20:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:12:19.665 20:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.IsR 00:12:19.665 20:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 5a42656131a2a57a73c6decb99c3e51afe2b36742882e048 2 00:12:19.665 20:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 5a42656131a2a57a73c6decb99c3e51afe2b36742882e048 2 00:12:19.665 20:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:12:19.665 20:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:12:19.665 20:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=5a42656131a2a57a73c6decb99c3e51afe2b36742882e048 00:12:19.665 20:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:12:19.665 20:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:12:19.665 20:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.IsR 00:12:19.665 20:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.IsR 00:12:19.665 20:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.IsR 00:12:19.665 20:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:12:19.665 20:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:12:19.665 20:40:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:12:19.665 20:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:12:19.665 20:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:12:19.665 20:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:12:19.665 20:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:12:19.665 20:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=6c9a4d788b7cec82712cd241c8581a7c 00:12:19.665 20:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:12:19.665 20:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.Zgq 00:12:19.665 20:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 6c9a4d788b7cec82712cd241c8581a7c 1 00:12:19.665 20:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 6c9a4d788b7cec82712cd241c8581a7c 1 00:12:19.665 20:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:12:19.665 20:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:12:19.665 20:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=6c9a4d788b7cec82712cd241c8581a7c 00:12:19.665 20:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:12:19.665 20:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:12:19.665 20:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.Zgq 00:12:19.665 20:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.Zgq 00:12:19.665 20:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.Zgq 00:12:19.665 20:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:12:19.665 20:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:12:19.665 20:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:12:19.665 20:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:12:19.665 20:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:12:19.665 20:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:12:19.665 20:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:12:19.665 20:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=4c5a4e692b93e76cf3dcb7245f92fa6a777a11853f533bce42660beb474f2509 00:12:19.665 20:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:12:19.665 20:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.0Zy 00:12:19.665 20:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 
4c5a4e692b93e76cf3dcb7245f92fa6a777a11853f533bce42660beb474f2509 3 00:12:19.665 20:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 4c5a4e692b93e76cf3dcb7245f92fa6a777a11853f533bce42660beb474f2509 3 00:12:19.665 20:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:12:19.665 20:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:12:19.665 20:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=4c5a4e692b93e76cf3dcb7245f92fa6a777a11853f533bce42660beb474f2509 00:12:19.665 20:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:12:19.665 20:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:12:19.923 20:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.0Zy 00:12:19.923 20:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.0Zy 00:12:19.923 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:19.923 20:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.0Zy 00:12:19.923 20:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:12:19.923 20:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 67638 00:12:19.923 20:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 67638 ']' 00:12:19.923 20:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:19.923 20:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:19.923 20:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:19.923 20:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:19.923 20:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:20.181 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:12:20.181 20:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:20.181 20:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:12:20.181 20:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 67668 /var/tmp/host.sock 00:12:20.181 20:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 67668 ']' 00:12:20.181 20:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:12:20.181 20:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:20.181 20:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 
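[Editor's note] The DH-HMAC-CHAP key material used above is produced by gen_dhchap_key in nvmf/common.sh. Below is a minimal sketch of that flow, reconstructed only from the commands visible in this trace (xxd, mktemp, chmod, echo); the body of the inline "python -" step is not captured by xtrace, so the comment describing the DHHC-1:<digest-id>:<encoded-key>: output line is an assumption inferred from the DHHC-1:… secrets that appear later in the log, not the verbatim helper.

gen_dhchap_key() {                                        # sketch, not the verbatim nvmf/common.sh helper
    local digest=$1 len=$2                                # e.g. "sha512" and 64 (hex characters requested)
    local key file
    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)        # random hex string, len/2 bytes -> len hex chars
    file=$(mktemp -t "spdk.key-$digest.XXX")              # e.g. /tmp/spdk.key-sha512.Bho
    # format_dhchap_key / format_key wrap the hex string into an NVMe DH-HMAC-CHAP
    # secret via an inline "python -" snippet (body not shown in this trace); the file
    # is assumed to end up holding one line shaped like DHHC-1:<digest-id>:<encoded-key>:
    chmod 0600 "$file"                                    # restrict the secret to the owner
    echo "$file"                                          # caller records the path in keys[] / ckeys[]
}

In the run above this is invoked once per keyid (keys[0..3]) plus once per controller key (ckeys[0..2]); the resulting file paths are what later get registered on both the target RPC socket (/var/tmp/spdk.sock) and the host RPC socket (/var/tmp/host.sock) with keyring_file_add_key.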
00:12:20.181 20:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:20.181 20:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:20.440 20:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:20.440 20:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:12:20.440 20:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:12:20.440 20:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.440 20:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:20.440 20:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.440 20:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:12:20.440 20:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Akr 00:12:20.440 20:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.440 20:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:20.440 20:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.440 20:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.Akr 00:12:20.440 20:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.Akr 00:12:21.008 20:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.Bho ]] 00:12:21.008 20:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Bho 00:12:21.008 20:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.008 20:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:21.008 20:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.008 20:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Bho 00:12:21.008 20:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Bho 00:12:21.268 20:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:12:21.268 20:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.xDg 00:12:21.268 20:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.268 20:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:21.268 20:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.268 20:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.xDg 00:12:21.268 20:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.xDg 00:12:21.527 20:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.zYK ]] 00:12:21.527 20:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.zYK 00:12:21.527 20:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.527 20:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:21.527 20:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.527 20:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.zYK 00:12:21.527 20:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.zYK 00:12:21.786 20:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:12:21.786 20:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.IsR 00:12:21.786 20:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.787 20:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:21.787 20:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.787 20:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.IsR 00:12:21.787 20:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.IsR 00:12:22.050 20:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.Zgq ]] 00:12:22.050 20:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Zgq 00:12:22.050 20:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.050 20:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:22.050 20:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.050 20:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Zgq 00:12:22.050 20:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Zgq 00:12:22.309 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:12:22.309 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.0Zy 00:12:22.309 20:40:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.309 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:22.309 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.309 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.0Zy 00:12:22.309 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.0Zy 00:12:22.571 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:12:22.571 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:12:22.571 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:22.571 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:22.571 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:12:22.571 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:12:22.832 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:12:22.832 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:22.832 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:22.832 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:22.832 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:22.832 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:22.832 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:22.832 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.832 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:22.832 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.832 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:22.832 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:22.832 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:23.090 00:12:23.090 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:23.090 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:23.090 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:23.350 20:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:23.350 20:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:23.350 20:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.350 20:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:23.350 20:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.350 20:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:23.350 { 00:12:23.350 "cntlid": 1, 00:12:23.350 "qid": 0, 00:12:23.350 "state": "enabled", 00:12:23.350 "thread": "nvmf_tgt_poll_group_000", 00:12:23.350 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b", 00:12:23.350 "listen_address": { 00:12:23.350 "trtype": "TCP", 00:12:23.350 "adrfam": "IPv4", 00:12:23.350 "traddr": "10.0.0.3", 00:12:23.350 "trsvcid": "4420" 00:12:23.350 }, 00:12:23.350 "peer_address": { 00:12:23.350 "trtype": "TCP", 00:12:23.350 "adrfam": "IPv4", 00:12:23.350 "traddr": "10.0.0.1", 00:12:23.350 "trsvcid": "60520" 00:12:23.350 }, 00:12:23.350 "auth": { 00:12:23.350 "state": "completed", 00:12:23.350 "digest": "sha256", 00:12:23.350 "dhgroup": "null" 00:12:23.350 } 00:12:23.350 } 00:12:23.350 ]' 00:12:23.350 20:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:23.350 20:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:23.350 20:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:23.350 20:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:23.350 20:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:23.608 20:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:23.608 20:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:23.608 20:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:23.867 20:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MDE3MjVlOGU5ZmYyMGY4NzNlZDQxZmEzODE4NGU5YzRkNzQxMjlhYzQ4Njc3Mzcyww816A==: --dhchap-ctrl-secret DHHC-1:03:MjQ4NGMyNTkzNWMxNmU2OTRmYTkxYjdjOWU2MGY5OWY4OTc2NWQzMzRjMTI0NTZjYmMxZGJlMjRmMDdlNTRkZRKhdRk=: 00:12:23.867 20:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b --hostid 5b7a0101-ee75-44bd-b64f-b6a56d193f2b -l 0 --dhchap-secret DHHC-1:00:MDE3MjVlOGU5ZmYyMGY4NzNlZDQxZmEzODE4NGU5YzRkNzQxMjlhYzQ4Njc3Mzcyww816A==: --dhchap-ctrl-secret DHHC-1:03:MjQ4NGMyNTkzNWMxNmU2OTRmYTkxYjdjOWU2MGY5OWY4OTc2NWQzMzRjMTI0NTZjYmMxZGJlMjRmMDdlNTRkZRKhdRk=: 00:12:28.068 20:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:28.068 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:28.068 20:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b 00:12:28.068 20:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.068 20:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:28.068 20:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.068 20:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:28.068 20:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:12:28.068 20:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:12:28.326 20:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:12:28.326 20:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:28.326 20:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:28.326 20:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:28.326 20:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:28.326 20:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:28.326 20:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:28.326 20:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.326 20:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:28.326 20:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.326 20:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:28.326 20:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:28.326 20:40:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:28.584 00:12:28.843 20:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:28.843 20:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:28.843 20:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:29.101 20:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:29.101 20:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:29.101 20:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.101 20:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:29.101 20:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.101 20:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:29.101 { 00:12:29.101 "cntlid": 3, 00:12:29.101 "qid": 0, 00:12:29.101 "state": "enabled", 00:12:29.101 "thread": "nvmf_tgt_poll_group_000", 00:12:29.101 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b", 00:12:29.101 "listen_address": { 00:12:29.101 "trtype": "TCP", 00:12:29.101 "adrfam": "IPv4", 00:12:29.101 "traddr": "10.0.0.3", 00:12:29.101 "trsvcid": "4420" 00:12:29.101 }, 00:12:29.101 "peer_address": { 00:12:29.101 "trtype": "TCP", 00:12:29.101 "adrfam": "IPv4", 00:12:29.101 "traddr": "10.0.0.1", 00:12:29.101 "trsvcid": "46548" 00:12:29.101 }, 00:12:29.101 "auth": { 00:12:29.101 "state": "completed", 00:12:29.101 "digest": "sha256", 00:12:29.101 "dhgroup": "null" 00:12:29.101 } 00:12:29.101 } 00:12:29.101 ]' 00:12:29.101 20:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:29.101 20:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:29.101 20:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:29.101 20:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:29.101 20:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:29.101 20:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:29.101 20:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:29.101 20:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:29.360 20:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTgxODEwNzM3NGU1M2M2NDJmZjkwNGI2N2Y4ZjRkMzEJJzKk: --dhchap-ctrl-secret 
DHHC-1:02:YjY4YTc0ZTg0YjJmMmM5NDc2MTJjOTk4OTZmMGI5MzMzNWZmYjBmNjZiZTFlZGIx44n4Fg==: 00:12:29.360 20:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b --hostid 5b7a0101-ee75-44bd-b64f-b6a56d193f2b -l 0 --dhchap-secret DHHC-1:01:OTgxODEwNzM3NGU1M2M2NDJmZjkwNGI2N2Y4ZjRkMzEJJzKk: --dhchap-ctrl-secret DHHC-1:02:YjY4YTc0ZTg0YjJmMmM5NDc2MTJjOTk4OTZmMGI5MzMzNWZmYjBmNjZiZTFlZGIx44n4Fg==: 00:12:30.297 20:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:30.297 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:30.297 20:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b 00:12:30.297 20:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.297 20:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:30.297 20:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.297 20:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:30.297 20:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:12:30.297 20:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:12:30.555 20:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:12:30.555 20:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:30.555 20:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:30.555 20:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:30.555 20:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:30.555 20:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:30.555 20:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:30.555 20:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.555 20:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:30.555 20:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.555 20:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:30.555 20:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b 
-n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:30.555 20:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:31.163 00:12:31.163 20:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:31.163 20:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:31.163 20:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:31.421 20:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:31.421 20:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:31.421 20:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.421 20:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:31.421 20:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.421 20:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:31.421 { 00:12:31.421 "cntlid": 5, 00:12:31.421 "qid": 0, 00:12:31.421 "state": "enabled", 00:12:31.421 "thread": "nvmf_tgt_poll_group_000", 00:12:31.421 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b", 00:12:31.421 "listen_address": { 00:12:31.421 "trtype": "TCP", 00:12:31.421 "adrfam": "IPv4", 00:12:31.421 "traddr": "10.0.0.3", 00:12:31.421 "trsvcid": "4420" 00:12:31.421 }, 00:12:31.421 "peer_address": { 00:12:31.421 "trtype": "TCP", 00:12:31.421 "adrfam": "IPv4", 00:12:31.421 "traddr": "10.0.0.1", 00:12:31.421 "trsvcid": "46572" 00:12:31.421 }, 00:12:31.421 "auth": { 00:12:31.421 "state": "completed", 00:12:31.421 "digest": "sha256", 00:12:31.421 "dhgroup": "null" 00:12:31.421 } 00:12:31.421 } 00:12:31.421 ]' 00:12:31.421 20:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:31.421 20:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:31.421 20:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:31.421 20:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:31.421 20:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:31.421 20:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:31.421 20:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:31.421 20:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:31.988 20:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:NWE0MjY1NjEzMWEyYTU3YTczYzZkZWNiOTljM2U1MWFmZTJiMzY3NDI4ODJlMDQ4yd9QJg==: --dhchap-ctrl-secret DHHC-1:01:NmM5YTRkNzg4YjdjZWM4MjcxMmNkMjQxYzg1ODFhN2P3y4Ex: 00:12:31.988 20:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b --hostid 5b7a0101-ee75-44bd-b64f-b6a56d193f2b -l 0 --dhchap-secret DHHC-1:02:NWE0MjY1NjEzMWEyYTU3YTczYzZkZWNiOTljM2U1MWFmZTJiMzY3NDI4ODJlMDQ4yd9QJg==: --dhchap-ctrl-secret DHHC-1:01:NmM5YTRkNzg4YjdjZWM4MjcxMmNkMjQxYzg1ODFhN2P3y4Ex: 00:12:32.554 20:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:32.554 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:32.554 20:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b 00:12:32.554 20:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.554 20:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:32.554 20:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.554 20:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:32.554 20:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:12:32.554 20:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:12:32.812 20:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:12:32.812 20:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:32.812 20:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:32.812 20:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:32.812 20:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:32.812 20:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:32.812 20:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b --dhchap-key key3 00:12:32.812 20:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.812 20:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:32.812 20:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.812 20:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:32.812 20:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:32.812 20:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:33.378 00:12:33.378 20:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:33.378 20:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:33.378 20:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:33.636 20:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:33.636 20:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:33.636 20:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.636 20:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:33.636 20:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.636 20:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:33.636 { 00:12:33.636 "cntlid": 7, 00:12:33.636 "qid": 0, 00:12:33.636 "state": "enabled", 00:12:33.636 "thread": "nvmf_tgt_poll_group_000", 00:12:33.636 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b", 00:12:33.636 "listen_address": { 00:12:33.636 "trtype": "TCP", 00:12:33.636 "adrfam": "IPv4", 00:12:33.636 "traddr": "10.0.0.3", 00:12:33.636 "trsvcid": "4420" 00:12:33.636 }, 00:12:33.636 "peer_address": { 00:12:33.636 "trtype": "TCP", 00:12:33.636 "adrfam": "IPv4", 00:12:33.636 "traddr": "10.0.0.1", 00:12:33.636 "trsvcid": "46598" 00:12:33.636 }, 00:12:33.636 "auth": { 00:12:33.636 "state": "completed", 00:12:33.636 "digest": "sha256", 00:12:33.636 "dhgroup": "null" 00:12:33.636 } 00:12:33.636 } 00:12:33.636 ]' 00:12:33.636 20:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:33.636 20:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:33.636 20:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:33.636 20:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:33.636 20:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:33.893 20:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:33.893 20:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:33.893 20:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:34.152 20:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:NGM1YTRlNjkyYjkzZTc2Y2YzZGNiNzI0NWY5MmZhNmE3NzdhMTE4NTNmNTMzYmNlNDI2NjBiZWI0NzRmMjUwOfXqmRU=: 00:12:34.152 20:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b --hostid 5b7a0101-ee75-44bd-b64f-b6a56d193f2b -l 0 --dhchap-secret DHHC-1:03:NGM1YTRlNjkyYjkzZTc2Y2YzZGNiNzI0NWY5MmZhNmE3NzdhMTE4NTNmNTMzYmNlNDI2NjBiZWI0NzRmMjUwOfXqmRU=: 00:12:35.087 20:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:35.087 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:35.087 20:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b 00:12:35.087 20:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.087 20:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:35.087 20:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.087 20:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:35.087 20:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:35.087 20:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:12:35.087 20:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:12:35.087 20:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:12:35.087 20:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:35.087 20:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:35.087 20:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:35.087 20:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:35.087 20:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:35.087 20:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:35.087 20:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.087 20:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:35.344 20:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.344 20:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:35.344 20:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t 
tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:35.344 20:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:35.601 00:12:35.601 20:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:35.601 20:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:35.601 20:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:35.859 20:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:35.859 20:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:35.859 20:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.859 20:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:35.859 20:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.859 20:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:35.859 { 00:12:35.859 "cntlid": 9, 00:12:35.859 "qid": 0, 00:12:35.859 "state": "enabled", 00:12:35.859 "thread": "nvmf_tgt_poll_group_000", 00:12:35.859 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b", 00:12:35.859 "listen_address": { 00:12:35.859 "trtype": "TCP", 00:12:35.859 "adrfam": "IPv4", 00:12:35.859 "traddr": "10.0.0.3", 00:12:35.859 "trsvcid": "4420" 00:12:35.859 }, 00:12:35.859 "peer_address": { 00:12:35.859 "trtype": "TCP", 00:12:35.859 "adrfam": "IPv4", 00:12:35.859 "traddr": "10.0.0.1", 00:12:35.859 "trsvcid": "46622" 00:12:35.859 }, 00:12:35.859 "auth": { 00:12:35.859 "state": "completed", 00:12:35.859 "digest": "sha256", 00:12:35.859 "dhgroup": "ffdhe2048" 00:12:35.859 } 00:12:35.859 } 00:12:35.859 ]' 00:12:35.859 20:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:36.117 20:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:36.117 20:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:36.117 20:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:36.117 20:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:36.117 20:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:36.117 20:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:36.117 20:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:36.375 
20:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MDE3MjVlOGU5ZmYyMGY4NzNlZDQxZmEzODE4NGU5YzRkNzQxMjlhYzQ4Njc3Mzcyww816A==: --dhchap-ctrl-secret DHHC-1:03:MjQ4NGMyNTkzNWMxNmU2OTRmYTkxYjdjOWU2MGY5OWY4OTc2NWQzMzRjMTI0NTZjYmMxZGJlMjRmMDdlNTRkZRKhdRk=: 00:12:36.375 20:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b --hostid 5b7a0101-ee75-44bd-b64f-b6a56d193f2b -l 0 --dhchap-secret DHHC-1:00:MDE3MjVlOGU5ZmYyMGY4NzNlZDQxZmEzODE4NGU5YzRkNzQxMjlhYzQ4Njc3Mzcyww816A==: --dhchap-ctrl-secret DHHC-1:03:MjQ4NGMyNTkzNWMxNmU2OTRmYTkxYjdjOWU2MGY5OWY4OTc2NWQzMzRjMTI0NTZjYmMxZGJlMjRmMDdlNTRkZRKhdRk=: 00:12:37.312 20:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:37.312 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:37.312 20:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b 00:12:37.312 20:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.312 20:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:37.312 20:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.312 20:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:37.312 20:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:12:37.312 20:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:12:37.571 20:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:12:37.571 20:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:37.571 20:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:37.571 20:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:37.571 20:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:37.571 20:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:37.571 20:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:37.571 20:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.571 20:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:37.571 20:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.571 20:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:37.571 20:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:37.571 20:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:37.830 00:12:37.830 20:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:37.830 20:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:37.830 20:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:38.398 20:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:38.398 20:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:38.398 20:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.398 20:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:38.398 20:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.398 20:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:38.398 { 00:12:38.398 "cntlid": 11, 00:12:38.398 "qid": 0, 00:12:38.398 "state": "enabled", 00:12:38.398 "thread": "nvmf_tgt_poll_group_000", 00:12:38.398 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b", 00:12:38.398 "listen_address": { 00:12:38.398 "trtype": "TCP", 00:12:38.398 "adrfam": "IPv4", 00:12:38.398 "traddr": "10.0.0.3", 00:12:38.398 "trsvcid": "4420" 00:12:38.398 }, 00:12:38.398 "peer_address": { 00:12:38.398 "trtype": "TCP", 00:12:38.398 "adrfam": "IPv4", 00:12:38.398 "traddr": "10.0.0.1", 00:12:38.398 "trsvcid": "60164" 00:12:38.398 }, 00:12:38.398 "auth": { 00:12:38.398 "state": "completed", 00:12:38.398 "digest": "sha256", 00:12:38.398 "dhgroup": "ffdhe2048" 00:12:38.398 } 00:12:38.398 } 00:12:38.398 ]' 00:12:38.398 20:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:38.398 20:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:38.398 20:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:38.398 20:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:38.398 20:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:38.398 20:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:38.398 20:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:38.398 
20:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:38.656 20:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTgxODEwNzM3NGU1M2M2NDJmZjkwNGI2N2Y4ZjRkMzEJJzKk: --dhchap-ctrl-secret DHHC-1:02:YjY4YTc0ZTg0YjJmMmM5NDc2MTJjOTk4OTZmMGI5MzMzNWZmYjBmNjZiZTFlZGIx44n4Fg==: 00:12:38.657 20:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b --hostid 5b7a0101-ee75-44bd-b64f-b6a56d193f2b -l 0 --dhchap-secret DHHC-1:01:OTgxODEwNzM3NGU1M2M2NDJmZjkwNGI2N2Y4ZjRkMzEJJzKk: --dhchap-ctrl-secret DHHC-1:02:YjY4YTc0ZTg0YjJmMmM5NDc2MTJjOTk4OTZmMGI5MzMzNWZmYjBmNjZiZTFlZGIx44n4Fg==: 00:12:39.592 20:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:39.592 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:39.592 20:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b 00:12:39.592 20:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.592 20:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:39.592 20:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.592 20:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:39.592 20:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:12:39.592 20:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:12:39.852 20:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:12:39.852 20:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:39.852 20:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:39.852 20:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:39.852 20:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:39.852 20:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:39.852 20:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:39.852 20:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.852 20:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:39.852 20:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:12:39.852 20:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:39.852 20:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:39.852 20:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:40.110 00:12:40.110 20:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:40.110 20:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:40.110 20:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:40.370 20:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:40.370 20:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:40.370 20:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.370 20:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:40.370 20:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.370 20:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:40.370 { 00:12:40.370 "cntlid": 13, 00:12:40.370 "qid": 0, 00:12:40.370 "state": "enabled", 00:12:40.370 "thread": "nvmf_tgt_poll_group_000", 00:12:40.370 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b", 00:12:40.370 "listen_address": { 00:12:40.370 "trtype": "TCP", 00:12:40.370 "adrfam": "IPv4", 00:12:40.370 "traddr": "10.0.0.3", 00:12:40.370 "trsvcid": "4420" 00:12:40.370 }, 00:12:40.370 "peer_address": { 00:12:40.370 "trtype": "TCP", 00:12:40.370 "adrfam": "IPv4", 00:12:40.370 "traddr": "10.0.0.1", 00:12:40.370 "trsvcid": "60182" 00:12:40.370 }, 00:12:40.370 "auth": { 00:12:40.370 "state": "completed", 00:12:40.370 "digest": "sha256", 00:12:40.370 "dhgroup": "ffdhe2048" 00:12:40.370 } 00:12:40.370 } 00:12:40.370 ]' 00:12:40.370 20:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:40.628 20:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:40.628 20:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:40.628 20:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:40.628 20:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:40.628 20:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:40.628 20:40:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:40.628 20:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:40.888 20:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NWE0MjY1NjEzMWEyYTU3YTczYzZkZWNiOTljM2U1MWFmZTJiMzY3NDI4ODJlMDQ4yd9QJg==: --dhchap-ctrl-secret DHHC-1:01:NmM5YTRkNzg4YjdjZWM4MjcxMmNkMjQxYzg1ODFhN2P3y4Ex: 00:12:40.888 20:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b --hostid 5b7a0101-ee75-44bd-b64f-b6a56d193f2b -l 0 --dhchap-secret DHHC-1:02:NWE0MjY1NjEzMWEyYTU3YTczYzZkZWNiOTljM2U1MWFmZTJiMzY3NDI4ODJlMDQ4yd9QJg==: --dhchap-ctrl-secret DHHC-1:01:NmM5YTRkNzg4YjdjZWM4MjcxMmNkMjQxYzg1ODFhN2P3y4Ex: 00:12:41.824 20:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:41.824 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:41.824 20:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b 00:12:41.824 20:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.824 20:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:41.824 20:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.824 20:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:41.824 20:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:12:41.824 20:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:12:41.824 20:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:12:41.824 20:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:41.824 20:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:41.824 20:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:41.824 20:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:41.824 20:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:41.824 20:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b --dhchap-key key3 00:12:41.824 20:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.824 20:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:12:41.824 20:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.824 20:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:41.824 20:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:41.824 20:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:42.392 00:12:42.392 20:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:42.392 20:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:42.392 20:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:42.650 20:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:42.650 20:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:42.650 20:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.650 20:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:42.650 20:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.650 20:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:42.650 { 00:12:42.650 "cntlid": 15, 00:12:42.650 "qid": 0, 00:12:42.650 "state": "enabled", 00:12:42.650 "thread": "nvmf_tgt_poll_group_000", 00:12:42.650 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b", 00:12:42.650 "listen_address": { 00:12:42.650 "trtype": "TCP", 00:12:42.650 "adrfam": "IPv4", 00:12:42.650 "traddr": "10.0.0.3", 00:12:42.650 "trsvcid": "4420" 00:12:42.650 }, 00:12:42.650 "peer_address": { 00:12:42.650 "trtype": "TCP", 00:12:42.650 "adrfam": "IPv4", 00:12:42.650 "traddr": "10.0.0.1", 00:12:42.650 "trsvcid": "60200" 00:12:42.650 }, 00:12:42.650 "auth": { 00:12:42.650 "state": "completed", 00:12:42.650 "digest": "sha256", 00:12:42.650 "dhgroup": "ffdhe2048" 00:12:42.650 } 00:12:42.650 } 00:12:42.650 ]' 00:12:42.650 20:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:42.650 20:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:42.650 20:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:42.650 20:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:42.650 20:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:42.650 20:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:42.650 
20:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:42.650 20:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:43.217 20:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGM1YTRlNjkyYjkzZTc2Y2YzZGNiNzI0NWY5MmZhNmE3NzdhMTE4NTNmNTMzYmNlNDI2NjBiZWI0NzRmMjUwOfXqmRU=: 00:12:43.217 20:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b --hostid 5b7a0101-ee75-44bd-b64f-b6a56d193f2b -l 0 --dhchap-secret DHHC-1:03:NGM1YTRlNjkyYjkzZTc2Y2YzZGNiNzI0NWY5MmZhNmE3NzdhMTE4NTNmNTMzYmNlNDI2NjBiZWI0NzRmMjUwOfXqmRU=: 00:12:43.836 20:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:43.836 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:43.836 20:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b 00:12:43.836 20:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.836 20:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:43.836 20:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.836 20:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:43.836 20:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:43.836 20:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:12:43.836 20:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:12:44.094 20:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:12:44.094 20:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:44.094 20:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:44.094 20:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:12:44.094 20:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:44.094 20:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:44.094 20:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:44.094 20:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.094 20:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:12:44.094 20:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.094 20:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:44.094 20:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:44.094 20:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:44.661 00:12:44.661 20:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:44.661 20:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:44.661 20:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:44.920 20:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:44.920 20:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:44.920 20:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.920 20:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:44.920 20:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.920 20:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:44.920 { 00:12:44.920 "cntlid": 17, 00:12:44.920 "qid": 0, 00:12:44.920 "state": "enabled", 00:12:44.920 "thread": "nvmf_tgt_poll_group_000", 00:12:44.920 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b", 00:12:44.920 "listen_address": { 00:12:44.920 "trtype": "TCP", 00:12:44.920 "adrfam": "IPv4", 00:12:44.920 "traddr": "10.0.0.3", 00:12:44.920 "trsvcid": "4420" 00:12:44.920 }, 00:12:44.920 "peer_address": { 00:12:44.920 "trtype": "TCP", 00:12:44.920 "adrfam": "IPv4", 00:12:44.920 "traddr": "10.0.0.1", 00:12:44.920 "trsvcid": "60224" 00:12:44.920 }, 00:12:44.920 "auth": { 00:12:44.920 "state": "completed", 00:12:44.920 "digest": "sha256", 00:12:44.920 "dhgroup": "ffdhe3072" 00:12:44.920 } 00:12:44.920 } 00:12:44.920 ]' 00:12:44.920 20:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:44.920 20:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:44.920 20:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:44.920 20:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:44.920 20:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:44.920 20:40:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:44.920 20:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:44.920 20:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:45.179 20:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MDE3MjVlOGU5ZmYyMGY4NzNlZDQxZmEzODE4NGU5YzRkNzQxMjlhYzQ4Njc3Mzcyww816A==: --dhchap-ctrl-secret DHHC-1:03:MjQ4NGMyNTkzNWMxNmU2OTRmYTkxYjdjOWU2MGY5OWY4OTc2NWQzMzRjMTI0NTZjYmMxZGJlMjRmMDdlNTRkZRKhdRk=: 00:12:45.179 20:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b --hostid 5b7a0101-ee75-44bd-b64f-b6a56d193f2b -l 0 --dhchap-secret DHHC-1:00:MDE3MjVlOGU5ZmYyMGY4NzNlZDQxZmEzODE4NGU5YzRkNzQxMjlhYzQ4Njc3Mzcyww816A==: --dhchap-ctrl-secret DHHC-1:03:MjQ4NGMyNTkzNWMxNmU2OTRmYTkxYjdjOWU2MGY5OWY4OTc2NWQzMzRjMTI0NTZjYmMxZGJlMjRmMDdlNTRkZRKhdRk=: 00:12:46.117 20:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:46.117 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:46.117 20:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b 00:12:46.117 20:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.117 20:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:46.117 20:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.117 20:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:46.117 20:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:12:46.117 20:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:12:46.375 20:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:12:46.375 20:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:46.375 20:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:46.375 20:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:12:46.375 20:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:46.375 20:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:46.375 20:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b --dhchap-key key1 --dhchap-ctrlr-key 
ckey1 00:12:46.375 20:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.375 20:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:46.375 20:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.375 20:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:46.375 20:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:46.375 20:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:46.634 00:12:46.634 20:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:46.634 20:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:46.634 20:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:46.893 20:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:46.893 20:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:46.893 20:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.893 20:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:46.893 20:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.893 20:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:46.893 { 00:12:46.893 "cntlid": 19, 00:12:46.893 "qid": 0, 00:12:46.893 "state": "enabled", 00:12:46.893 "thread": "nvmf_tgt_poll_group_000", 00:12:46.893 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b", 00:12:46.893 "listen_address": { 00:12:46.893 "trtype": "TCP", 00:12:46.893 "adrfam": "IPv4", 00:12:46.893 "traddr": "10.0.0.3", 00:12:46.893 "trsvcid": "4420" 00:12:46.893 }, 00:12:46.893 "peer_address": { 00:12:46.893 "trtype": "TCP", 00:12:46.893 "adrfam": "IPv4", 00:12:46.893 "traddr": "10.0.0.1", 00:12:46.893 "trsvcid": "60252" 00:12:46.893 }, 00:12:46.893 "auth": { 00:12:46.893 "state": "completed", 00:12:46.893 "digest": "sha256", 00:12:46.893 "dhgroup": "ffdhe3072" 00:12:46.893 } 00:12:46.893 } 00:12:46.893 ]' 00:12:46.893 20:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:46.893 20:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:46.893 20:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:47.151 20:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:47.151 20:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:47.151 20:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:47.151 20:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:47.151 20:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:47.410 20:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTgxODEwNzM3NGU1M2M2NDJmZjkwNGI2N2Y4ZjRkMzEJJzKk: --dhchap-ctrl-secret DHHC-1:02:YjY4YTc0ZTg0YjJmMmM5NDc2MTJjOTk4OTZmMGI5MzMzNWZmYjBmNjZiZTFlZGIx44n4Fg==: 00:12:47.410 20:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b --hostid 5b7a0101-ee75-44bd-b64f-b6a56d193f2b -l 0 --dhchap-secret DHHC-1:01:OTgxODEwNzM3NGU1M2M2NDJmZjkwNGI2N2Y4ZjRkMzEJJzKk: --dhchap-ctrl-secret DHHC-1:02:YjY4YTc0ZTg0YjJmMmM5NDc2MTJjOTk4OTZmMGI5MzMzNWZmYjBmNjZiZTFlZGIx44n4Fg==: 00:12:47.976 20:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:47.976 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:47.976 20:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b 00:12:47.976 20:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.976 20:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:47.976 20:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.976 20:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:47.976 20:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:12:47.976 20:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:12:48.240 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:12:48.240 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:48.240 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:48.240 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:12:48.240 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:48.240 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:48.240 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:48.240 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.240 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:48.240 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.240 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:48.240 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:48.240 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:48.858 00:12:48.858 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:48.858 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:48.858 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:49.116 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:49.116 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:49.116 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.116 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:49.116 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.116 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:49.116 { 00:12:49.116 "cntlid": 21, 00:12:49.116 "qid": 0, 00:12:49.116 "state": "enabled", 00:12:49.116 "thread": "nvmf_tgt_poll_group_000", 00:12:49.116 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b", 00:12:49.116 "listen_address": { 00:12:49.116 "trtype": "TCP", 00:12:49.116 "adrfam": "IPv4", 00:12:49.116 "traddr": "10.0.0.3", 00:12:49.116 "trsvcid": "4420" 00:12:49.116 }, 00:12:49.116 "peer_address": { 00:12:49.116 "trtype": "TCP", 00:12:49.116 "adrfam": "IPv4", 00:12:49.116 "traddr": "10.0.0.1", 00:12:49.116 "trsvcid": "59988" 00:12:49.116 }, 00:12:49.116 "auth": { 00:12:49.116 "state": "completed", 00:12:49.116 "digest": "sha256", 00:12:49.116 "dhgroup": "ffdhe3072" 00:12:49.116 } 00:12:49.116 } 00:12:49.116 ]' 00:12:49.116 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:49.116 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:49.116 20:40:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:49.116 20:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:49.116 20:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:49.116 20:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:49.116 20:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:49.117 20:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:49.683 20:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NWE0MjY1NjEzMWEyYTU3YTczYzZkZWNiOTljM2U1MWFmZTJiMzY3NDI4ODJlMDQ4yd9QJg==: --dhchap-ctrl-secret DHHC-1:01:NmM5YTRkNzg4YjdjZWM4MjcxMmNkMjQxYzg1ODFhN2P3y4Ex: 00:12:49.683 20:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b --hostid 5b7a0101-ee75-44bd-b64f-b6a56d193f2b -l 0 --dhchap-secret DHHC-1:02:NWE0MjY1NjEzMWEyYTU3YTczYzZkZWNiOTljM2U1MWFmZTJiMzY3NDI4ODJlMDQ4yd9QJg==: --dhchap-ctrl-secret DHHC-1:01:NmM5YTRkNzg4YjdjZWM4MjcxMmNkMjQxYzg1ODFhN2P3y4Ex: 00:12:50.250 20:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:50.250 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:50.250 20:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b 00:12:50.250 20:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.251 20:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:50.251 20:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.251 20:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:50.251 20:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:12:50.251 20:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:12:50.816 20:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:12:50.816 20:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:50.816 20:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:50.816 20:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:12:50.816 20:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:50.816 20:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:50.816 20:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b --dhchap-key key3 00:12:50.816 20:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.816 20:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:50.816 20:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.816 20:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:50.816 20:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:50.816 20:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:51.382 00:12:51.382 20:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:51.382 20:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:51.382 20:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:51.642 20:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:51.642 20:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:51.642 20:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.642 20:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:51.642 20:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.642 20:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:51.642 { 00:12:51.642 "cntlid": 23, 00:12:51.642 "qid": 0, 00:12:51.642 "state": "enabled", 00:12:51.642 "thread": "nvmf_tgt_poll_group_000", 00:12:51.642 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b", 00:12:51.642 "listen_address": { 00:12:51.642 "trtype": "TCP", 00:12:51.642 "adrfam": "IPv4", 00:12:51.642 "traddr": "10.0.0.3", 00:12:51.642 "trsvcid": "4420" 00:12:51.642 }, 00:12:51.642 "peer_address": { 00:12:51.642 "trtype": "TCP", 00:12:51.642 "adrfam": "IPv4", 00:12:51.642 "traddr": "10.0.0.1", 00:12:51.642 "trsvcid": "60026" 00:12:51.642 }, 00:12:51.642 "auth": { 00:12:51.642 "state": "completed", 00:12:51.642 "digest": "sha256", 00:12:51.642 "dhgroup": "ffdhe3072" 00:12:51.642 } 00:12:51.642 } 00:12:51.642 ]' 00:12:51.642 20:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:51.901 20:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:12:51.901 20:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:51.901 20:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:51.901 20:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:51.901 20:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:51.901 20:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:51.901 20:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:52.160 20:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGM1YTRlNjkyYjkzZTc2Y2YzZGNiNzI0NWY5MmZhNmE3NzdhMTE4NTNmNTMzYmNlNDI2NjBiZWI0NzRmMjUwOfXqmRU=: 00:12:52.160 20:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b --hostid 5b7a0101-ee75-44bd-b64f-b6a56d193f2b -l 0 --dhchap-secret DHHC-1:03:NGM1YTRlNjkyYjkzZTc2Y2YzZGNiNzI0NWY5MmZhNmE3NzdhMTE4NTNmNTMzYmNlNDI2NjBiZWI0NzRmMjUwOfXqmRU=: 00:12:53.108 20:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:53.108 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:53.108 20:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b 00:12:53.108 20:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.108 20:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:53.108 20:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.108 20:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:53.108 20:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:53.108 20:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:53.108 20:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:53.368 20:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:12:53.368 20:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:53.368 20:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:53.368 20:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:53.368 20:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:53.368 20:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:53.368 20:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:53.368 20:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.368 20:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:53.368 20:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.368 20:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:53.368 20:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:53.368 20:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:53.627 00:12:53.627 20:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:53.627 20:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:53.627 20:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:53.885 20:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:53.885 20:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:53.885 20:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.885 20:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:53.885 20:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.885 20:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:53.885 { 00:12:53.885 "cntlid": 25, 00:12:53.885 "qid": 0, 00:12:53.885 "state": "enabled", 00:12:53.885 "thread": "nvmf_tgt_poll_group_000", 00:12:53.885 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b", 00:12:53.885 "listen_address": { 00:12:53.885 "trtype": "TCP", 00:12:53.885 "adrfam": "IPv4", 00:12:53.885 "traddr": "10.0.0.3", 00:12:53.885 "trsvcid": "4420" 00:12:53.885 }, 00:12:53.885 "peer_address": { 00:12:53.885 "trtype": "TCP", 00:12:53.885 "adrfam": "IPv4", 00:12:53.885 "traddr": "10.0.0.1", 00:12:53.885 "trsvcid": "60054" 00:12:53.885 }, 00:12:53.885 "auth": { 00:12:53.885 "state": "completed", 00:12:53.885 "digest": "sha256", 00:12:53.885 "dhgroup": "ffdhe4096" 00:12:53.885 } 00:12:53.885 } 00:12:53.885 ]' 00:12:53.885 20:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 00:12:53.885 20:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:53.885 20:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:54.143 20:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:54.143 20:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:54.143 20:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:54.143 20:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:54.143 20:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:54.401 20:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MDE3MjVlOGU5ZmYyMGY4NzNlZDQxZmEzODE4NGU5YzRkNzQxMjlhYzQ4Njc3Mzcyww816A==: --dhchap-ctrl-secret DHHC-1:03:MjQ4NGMyNTkzNWMxNmU2OTRmYTkxYjdjOWU2MGY5OWY4OTc2NWQzMzRjMTI0NTZjYmMxZGJlMjRmMDdlNTRkZRKhdRk=: 00:12:54.401 20:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b --hostid 5b7a0101-ee75-44bd-b64f-b6a56d193f2b -l 0 --dhchap-secret DHHC-1:00:MDE3MjVlOGU5ZmYyMGY4NzNlZDQxZmEzODE4NGU5YzRkNzQxMjlhYzQ4Njc3Mzcyww816A==: --dhchap-ctrl-secret DHHC-1:03:MjQ4NGMyNTkzNWMxNmU2OTRmYTkxYjdjOWU2MGY5OWY4OTc2NWQzMzRjMTI0NTZjYmMxZGJlMjRmMDdlNTRkZRKhdRk=: 00:12:55.335 20:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:55.335 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:55.335 20:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b 00:12:55.335 20:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.335 20:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:55.335 20:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.335 20:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:55.335 20:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:55.335 20:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:55.594 20:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:12:55.594 20:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:55.594 20:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:55.594 20:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:55.594 20:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:55.594 20:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:55.594 20:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:55.594 20:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.594 20:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:55.594 20:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.594 20:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:55.594 20:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:55.594 20:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:55.853 00:12:55.853 20:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:55.853 20:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:55.853 20:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:56.418 20:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:56.418 20:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:56.418 20:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.418 20:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:56.418 20:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.418 20:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:56.418 { 00:12:56.418 "cntlid": 27, 00:12:56.418 "qid": 0, 00:12:56.418 "state": "enabled", 00:12:56.418 "thread": "nvmf_tgt_poll_group_000", 00:12:56.418 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b", 00:12:56.418 "listen_address": { 00:12:56.418 "trtype": "TCP", 00:12:56.418 "adrfam": "IPv4", 00:12:56.418 "traddr": "10.0.0.3", 00:12:56.418 "trsvcid": "4420" 00:12:56.418 }, 00:12:56.418 "peer_address": { 00:12:56.418 "trtype": "TCP", 00:12:56.418 "adrfam": "IPv4", 00:12:56.418 "traddr": "10.0.0.1", 00:12:56.418 "trsvcid": "60074" 00:12:56.418 }, 00:12:56.418 "auth": { 00:12:56.418 "state": "completed", 
00:12:56.418 "digest": "sha256", 00:12:56.418 "dhgroup": "ffdhe4096" 00:12:56.418 } 00:12:56.418 } 00:12:56.418 ]' 00:12:56.418 20:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:56.418 20:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:56.418 20:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:56.418 20:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:56.418 20:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:56.418 20:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:56.418 20:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:56.418 20:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:56.985 20:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTgxODEwNzM3NGU1M2M2NDJmZjkwNGI2N2Y4ZjRkMzEJJzKk: --dhchap-ctrl-secret DHHC-1:02:YjY4YTc0ZTg0YjJmMmM5NDc2MTJjOTk4OTZmMGI5MzMzNWZmYjBmNjZiZTFlZGIx44n4Fg==: 00:12:56.985 20:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b --hostid 5b7a0101-ee75-44bd-b64f-b6a56d193f2b -l 0 --dhchap-secret DHHC-1:01:OTgxODEwNzM3NGU1M2M2NDJmZjkwNGI2N2Y4ZjRkMzEJJzKk: --dhchap-ctrl-secret DHHC-1:02:YjY4YTc0ZTg0YjJmMmM5NDc2MTJjOTk4OTZmMGI5MzMzNWZmYjBmNjZiZTFlZGIx44n4Fg==: 00:12:57.551 20:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:57.551 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:57.551 20:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b 00:12:57.551 20:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.551 20:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:57.551 20:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.551 20:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:57.551 20:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:57.551 20:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:57.810 20:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:12:57.810 20:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:57.810 20:40:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:57.810 20:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:57.810 20:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:57.810 20:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:57.810 20:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:57.810 20:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.810 20:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:57.810 20:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.810 20:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:57.810 20:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:57.810 20:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:58.375 00:12:58.375 20:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:58.375 20:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:58.375 20:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:58.634 20:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:58.634 20:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:58.634 20:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.634 20:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:58.634 20:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.634 20:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:58.634 { 00:12:58.634 "cntlid": 29, 00:12:58.634 "qid": 0, 00:12:58.634 "state": "enabled", 00:12:58.634 "thread": "nvmf_tgt_poll_group_000", 00:12:58.634 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b", 00:12:58.634 "listen_address": { 00:12:58.634 "trtype": "TCP", 00:12:58.634 "adrfam": "IPv4", 00:12:58.634 "traddr": "10.0.0.3", 00:12:58.634 "trsvcid": "4420" 00:12:58.634 }, 00:12:58.634 "peer_address": { 00:12:58.634 "trtype": "TCP", 00:12:58.634 "adrfam": 
"IPv4", 00:12:58.634 "traddr": "10.0.0.1", 00:12:58.634 "trsvcid": "50120" 00:12:58.634 }, 00:12:58.634 "auth": { 00:12:58.634 "state": "completed", 00:12:58.634 "digest": "sha256", 00:12:58.634 "dhgroup": "ffdhe4096" 00:12:58.634 } 00:12:58.634 } 00:12:58.634 ]' 00:12:58.634 20:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:58.634 20:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:58.634 20:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:58.634 20:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:58.891 20:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:58.891 20:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:58.891 20:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:58.891 20:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:59.149 20:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NWE0MjY1NjEzMWEyYTU3YTczYzZkZWNiOTljM2U1MWFmZTJiMzY3NDI4ODJlMDQ4yd9QJg==: --dhchap-ctrl-secret DHHC-1:01:NmM5YTRkNzg4YjdjZWM4MjcxMmNkMjQxYzg1ODFhN2P3y4Ex: 00:12:59.149 20:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b --hostid 5b7a0101-ee75-44bd-b64f-b6a56d193f2b -l 0 --dhchap-secret DHHC-1:02:NWE0MjY1NjEzMWEyYTU3YTczYzZkZWNiOTljM2U1MWFmZTJiMzY3NDI4ODJlMDQ4yd9QJg==: --dhchap-ctrl-secret DHHC-1:01:NmM5YTRkNzg4YjdjZWM4MjcxMmNkMjQxYzg1ODFhN2P3y4Ex: 00:12:59.716 20:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:59.716 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:59.716 20:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b 00:12:59.716 20:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.716 20:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:59.716 20:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.716 20:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:59.716 20:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:59.716 20:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:59.974 20:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:12:59.974 20:40:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:59.974 20:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:59.974 20:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:59.974 20:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:59.974 20:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:59.974 20:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b --dhchap-key key3 00:12:59.974 20:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.974 20:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:59.974 20:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.974 20:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:59.974 20:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:59.974 20:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:00.540 00:13:00.540 20:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:00.540 20:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:00.540 20:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:00.799 20:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:00.799 20:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:00.799 20:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.799 20:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:00.799 20:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.799 20:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:00.799 { 00:13:00.799 "cntlid": 31, 00:13:00.799 "qid": 0, 00:13:00.799 "state": "enabled", 00:13:00.799 "thread": "nvmf_tgt_poll_group_000", 00:13:00.799 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b", 00:13:00.799 "listen_address": { 00:13:00.799 "trtype": "TCP", 00:13:00.799 "adrfam": "IPv4", 00:13:00.799 "traddr": "10.0.0.3", 00:13:00.799 "trsvcid": "4420" 00:13:00.799 }, 00:13:00.799 "peer_address": { 00:13:00.799 "trtype": "TCP", 
00:13:00.799 "adrfam": "IPv4", 00:13:00.799 "traddr": "10.0.0.1", 00:13:00.799 "trsvcid": "50146" 00:13:00.799 }, 00:13:00.799 "auth": { 00:13:00.799 "state": "completed", 00:13:00.799 "digest": "sha256", 00:13:00.799 "dhgroup": "ffdhe4096" 00:13:00.799 } 00:13:00.799 } 00:13:00.799 ]' 00:13:00.799 20:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:00.799 20:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:00.799 20:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:00.799 20:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:00.799 20:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:01.058 20:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:01.058 20:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:01.058 20:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:01.316 20:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGM1YTRlNjkyYjkzZTc2Y2YzZGNiNzI0NWY5MmZhNmE3NzdhMTE4NTNmNTMzYmNlNDI2NjBiZWI0NzRmMjUwOfXqmRU=: 00:13:01.316 20:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b --hostid 5b7a0101-ee75-44bd-b64f-b6a56d193f2b -l 0 --dhchap-secret DHHC-1:03:NGM1YTRlNjkyYjkzZTc2Y2YzZGNiNzI0NWY5MmZhNmE3NzdhMTE4NTNmNTMzYmNlNDI2NjBiZWI0NzRmMjUwOfXqmRU=: 00:13:01.883 20:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:01.883 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:01.883 20:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b 00:13:01.883 20:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.883 20:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:01.883 20:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.883 20:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:01.883 20:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:01.883 20:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:01.883 20:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:02.141 20:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:13:02.141 
20:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:02.141 20:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:02.141 20:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:13:02.141 20:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:02.141 20:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:02.141 20:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:02.141 20:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.141 20:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:02.141 20:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.141 20:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:02.141 20:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:02.141 20:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:02.707 00:13:02.707 20:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:02.707 20:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:02.707 20:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:02.965 20:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:02.965 20:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:02.965 20:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.965 20:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:02.965 20:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.965 20:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:02.965 { 00:13:02.965 "cntlid": 33, 00:13:02.965 "qid": 0, 00:13:02.965 "state": "enabled", 00:13:02.965 "thread": "nvmf_tgt_poll_group_000", 00:13:02.965 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b", 00:13:02.965 "listen_address": { 00:13:02.965 "trtype": "TCP", 00:13:02.965 "adrfam": "IPv4", 00:13:02.965 "traddr": 
"10.0.0.3", 00:13:02.965 "trsvcid": "4420" 00:13:02.965 }, 00:13:02.965 "peer_address": { 00:13:02.965 "trtype": "TCP", 00:13:02.965 "adrfam": "IPv4", 00:13:02.965 "traddr": "10.0.0.1", 00:13:02.965 "trsvcid": "50170" 00:13:02.965 }, 00:13:02.965 "auth": { 00:13:02.965 "state": "completed", 00:13:02.965 "digest": "sha256", 00:13:02.965 "dhgroup": "ffdhe6144" 00:13:02.965 } 00:13:02.965 } 00:13:02.965 ]' 00:13:02.965 20:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:02.965 20:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:02.965 20:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:02.965 20:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:02.965 20:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:02.965 20:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:02.965 20:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:02.965 20:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:03.223 20:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MDE3MjVlOGU5ZmYyMGY4NzNlZDQxZmEzODE4NGU5YzRkNzQxMjlhYzQ4Njc3Mzcyww816A==: --dhchap-ctrl-secret DHHC-1:03:MjQ4NGMyNTkzNWMxNmU2OTRmYTkxYjdjOWU2MGY5OWY4OTc2NWQzMzRjMTI0NTZjYmMxZGJlMjRmMDdlNTRkZRKhdRk=: 00:13:03.223 20:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b --hostid 5b7a0101-ee75-44bd-b64f-b6a56d193f2b -l 0 --dhchap-secret DHHC-1:00:MDE3MjVlOGU5ZmYyMGY4NzNlZDQxZmEzODE4NGU5YzRkNzQxMjlhYzQ4Njc3Mzcyww816A==: --dhchap-ctrl-secret DHHC-1:03:MjQ4NGMyNTkzNWMxNmU2OTRmYTkxYjdjOWU2MGY5OWY4OTc2NWQzMzRjMTI0NTZjYmMxZGJlMjRmMDdlNTRkZRKhdRk=: 00:13:03.789 20:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:03.789 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:03.789 20:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b 00:13:03.789 20:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.789 20:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:04.047 20:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.047 20:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:04.047 20:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:04.047 20:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:04.047 20:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:13:04.047 20:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:04.047 20:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:04.047 20:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:13:04.047 20:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:04.047 20:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:04.048 20:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:04.048 20:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.048 20:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:04.048 20:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.048 20:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:04.048 20:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:04.048 20:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:04.612 00:13:04.612 20:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:04.612 20:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:04.612 20:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:04.870 20:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:04.870 20:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:04.870 20:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.870 20:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:04.870 20:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.870 20:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:04.870 { 00:13:04.870 "cntlid": 35, 00:13:04.870 "qid": 0, 00:13:04.870 "state": "enabled", 00:13:04.870 "thread": "nvmf_tgt_poll_group_000", 
00:13:04.870 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b", 00:13:04.870 "listen_address": { 00:13:04.870 "trtype": "TCP", 00:13:04.870 "adrfam": "IPv4", 00:13:04.870 "traddr": "10.0.0.3", 00:13:04.870 "trsvcid": "4420" 00:13:04.870 }, 00:13:04.870 "peer_address": { 00:13:04.870 "trtype": "TCP", 00:13:04.870 "adrfam": "IPv4", 00:13:04.870 "traddr": "10.0.0.1", 00:13:04.870 "trsvcid": "50202" 00:13:04.870 }, 00:13:04.870 "auth": { 00:13:04.870 "state": "completed", 00:13:04.870 "digest": "sha256", 00:13:04.870 "dhgroup": "ffdhe6144" 00:13:04.870 } 00:13:04.870 } 00:13:04.870 ]' 00:13:04.870 20:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:04.870 20:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:04.870 20:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:04.870 20:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:04.870 20:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:05.128 20:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:05.128 20:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:05.128 20:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:05.128 20:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTgxODEwNzM3NGU1M2M2NDJmZjkwNGI2N2Y4ZjRkMzEJJzKk: --dhchap-ctrl-secret DHHC-1:02:YjY4YTc0ZTg0YjJmMmM5NDc2MTJjOTk4OTZmMGI5MzMzNWZmYjBmNjZiZTFlZGIx44n4Fg==: 00:13:05.128 20:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b --hostid 5b7a0101-ee75-44bd-b64f-b6a56d193f2b -l 0 --dhchap-secret DHHC-1:01:OTgxODEwNzM3NGU1M2M2NDJmZjkwNGI2N2Y4ZjRkMzEJJzKk: --dhchap-ctrl-secret DHHC-1:02:YjY4YTc0ZTg0YjJmMmM5NDc2MTJjOTk4OTZmMGI5MzMzNWZmYjBmNjZiZTFlZGIx44n4Fg==: 00:13:06.062 20:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:06.062 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:06.062 20:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b 00:13:06.062 20:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.062 20:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:06.062 20:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.062 20:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:06.062 20:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:06.062 20:41:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:06.319 20:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:13:06.319 20:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:06.319 20:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:06.319 20:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:13:06.319 20:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:06.319 20:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:06.319 20:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:06.319 20:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.319 20:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:06.319 20:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.319 20:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:06.319 20:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:06.319 20:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:06.885 00:13:06.885 20:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:06.885 20:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:06.885 20:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:07.143 20:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:07.143 20:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:07.143 20:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.143 20:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:07.143 20:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.143 20:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:07.143 { 
00:13:07.143 "cntlid": 37, 00:13:07.143 "qid": 0, 00:13:07.143 "state": "enabled", 00:13:07.143 "thread": "nvmf_tgt_poll_group_000", 00:13:07.143 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b", 00:13:07.143 "listen_address": { 00:13:07.143 "trtype": "TCP", 00:13:07.143 "adrfam": "IPv4", 00:13:07.143 "traddr": "10.0.0.3", 00:13:07.143 "trsvcid": "4420" 00:13:07.143 }, 00:13:07.143 "peer_address": { 00:13:07.143 "trtype": "TCP", 00:13:07.143 "adrfam": "IPv4", 00:13:07.143 "traddr": "10.0.0.1", 00:13:07.143 "trsvcid": "50242" 00:13:07.143 }, 00:13:07.143 "auth": { 00:13:07.143 "state": "completed", 00:13:07.143 "digest": "sha256", 00:13:07.143 "dhgroup": "ffdhe6144" 00:13:07.143 } 00:13:07.143 } 00:13:07.143 ]' 00:13:07.143 20:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:07.143 20:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:07.143 20:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:07.143 20:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:07.143 20:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:07.143 20:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:07.143 20:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:07.143 20:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:07.400 20:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NWE0MjY1NjEzMWEyYTU3YTczYzZkZWNiOTljM2U1MWFmZTJiMzY3NDI4ODJlMDQ4yd9QJg==: --dhchap-ctrl-secret DHHC-1:01:NmM5YTRkNzg4YjdjZWM4MjcxMmNkMjQxYzg1ODFhN2P3y4Ex: 00:13:07.400 20:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b --hostid 5b7a0101-ee75-44bd-b64f-b6a56d193f2b -l 0 --dhchap-secret DHHC-1:02:NWE0MjY1NjEzMWEyYTU3YTczYzZkZWNiOTljM2U1MWFmZTJiMzY3NDI4ODJlMDQ4yd9QJg==: --dhchap-ctrl-secret DHHC-1:01:NmM5YTRkNzg4YjdjZWM4MjcxMmNkMjQxYzg1ODFhN2P3y4Ex: 00:13:07.966 20:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:08.227 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:08.227 20:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b 00:13:08.227 20:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.227 20:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:08.227 20:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.227 20:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:08.227 20:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:08.227 20:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:08.227 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:13:08.227 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:08.227 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:08.227 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:13:08.227 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:08.227 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:08.227 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b --dhchap-key key3 00:13:08.227 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.227 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:08.227 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.227 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:08.227 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:08.227 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:08.800 00:13:08.800 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:08.800 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:08.800 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:09.058 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:09.058 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:09.058 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.058 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:09.058 20:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.058 20:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 
00:13:09.058 { 00:13:09.058 "cntlid": 39, 00:13:09.059 "qid": 0, 00:13:09.059 "state": "enabled", 00:13:09.059 "thread": "nvmf_tgt_poll_group_000", 00:13:09.059 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b", 00:13:09.059 "listen_address": { 00:13:09.059 "trtype": "TCP", 00:13:09.059 "adrfam": "IPv4", 00:13:09.059 "traddr": "10.0.0.3", 00:13:09.059 "trsvcid": "4420" 00:13:09.059 }, 00:13:09.059 "peer_address": { 00:13:09.059 "trtype": "TCP", 00:13:09.059 "adrfam": "IPv4", 00:13:09.059 "traddr": "10.0.0.1", 00:13:09.059 "trsvcid": "45374" 00:13:09.059 }, 00:13:09.059 "auth": { 00:13:09.059 "state": "completed", 00:13:09.059 "digest": "sha256", 00:13:09.059 "dhgroup": "ffdhe6144" 00:13:09.059 } 00:13:09.059 } 00:13:09.059 ]' 00:13:09.059 20:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:09.059 20:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:09.059 20:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:09.317 20:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:09.317 20:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:09.317 20:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:09.317 20:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:09.317 20:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:09.576 20:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGM1YTRlNjkyYjkzZTc2Y2YzZGNiNzI0NWY5MmZhNmE3NzdhMTE4NTNmNTMzYmNlNDI2NjBiZWI0NzRmMjUwOfXqmRU=: 00:13:09.576 20:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b --hostid 5b7a0101-ee75-44bd-b64f-b6a56d193f2b -l 0 --dhchap-secret DHHC-1:03:NGM1YTRlNjkyYjkzZTc2Y2YzZGNiNzI0NWY5MmZhNmE3NzdhMTE4NTNmNTMzYmNlNDI2NjBiZWI0NzRmMjUwOfXqmRU=: 00:13:10.510 20:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:10.510 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:10.510 20:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b 00:13:10.510 20:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.510 20:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:10.510 20:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.510 20:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:10.510 20:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:10.510 20:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:10.510 20:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:10.510 20:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:13:10.510 20:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:10.510 20:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:10.510 20:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:10.510 20:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:10.510 20:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:10.510 20:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:10.510 20:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.510 20:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:10.510 20:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.510 20:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:10.510 20:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:10.510 20:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:11.443 00:13:11.443 20:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:11.443 20:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:11.443 20:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:11.701 20:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:11.701 20:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:11.701 20:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.701 20:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:11.701 20:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:13:11.701 20:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:11.701 { 00:13:11.701 "cntlid": 41, 00:13:11.701 "qid": 0, 00:13:11.701 "state": "enabled", 00:13:11.701 "thread": "nvmf_tgt_poll_group_000", 00:13:11.701 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b", 00:13:11.701 "listen_address": { 00:13:11.701 "trtype": "TCP", 00:13:11.701 "adrfam": "IPv4", 00:13:11.701 "traddr": "10.0.0.3", 00:13:11.701 "trsvcid": "4420" 00:13:11.701 }, 00:13:11.701 "peer_address": { 00:13:11.701 "trtype": "TCP", 00:13:11.701 "adrfam": "IPv4", 00:13:11.701 "traddr": "10.0.0.1", 00:13:11.701 "trsvcid": "45418" 00:13:11.701 }, 00:13:11.701 "auth": { 00:13:11.701 "state": "completed", 00:13:11.701 "digest": "sha256", 00:13:11.701 "dhgroup": "ffdhe8192" 00:13:11.701 } 00:13:11.701 } 00:13:11.701 ]' 00:13:11.701 20:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:11.701 20:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:11.701 20:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:11.701 20:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:11.701 20:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:11.701 20:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:11.701 20:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:11.701 20:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:12.267 20:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MDE3MjVlOGU5ZmYyMGY4NzNlZDQxZmEzODE4NGU5YzRkNzQxMjlhYzQ4Njc3Mzcyww816A==: --dhchap-ctrl-secret DHHC-1:03:MjQ4NGMyNTkzNWMxNmU2OTRmYTkxYjdjOWU2MGY5OWY4OTc2NWQzMzRjMTI0NTZjYmMxZGJlMjRmMDdlNTRkZRKhdRk=: 00:13:12.267 20:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b --hostid 5b7a0101-ee75-44bd-b64f-b6a56d193f2b -l 0 --dhchap-secret DHHC-1:00:MDE3MjVlOGU5ZmYyMGY4NzNlZDQxZmEzODE4NGU5YzRkNzQxMjlhYzQ4Njc3Mzcyww816A==: --dhchap-ctrl-secret DHHC-1:03:MjQ4NGMyNTkzNWMxNmU2OTRmYTkxYjdjOWU2MGY5OWY4OTc2NWQzMzRjMTI0NTZjYmMxZGJlMjRmMDdlNTRkZRKhdRk=: 00:13:13.201 20:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:13.201 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:13.201 20:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b 00:13:13.201 20:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.201 20:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:13.201 20:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
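
[Editor's note, not part of the captured log] The trace above repeats one fixed cycle for every digest/dhgroup/key combination (here sha256 with ffdhe8192 and key0). Below is a minimal sketch of the target/host RPC half of that cycle, reconstructed only from the commands visible in this run: the socket path /var/tmp/host.sock, the 10.0.0.3:4420 listener, and the NQNs are taken from the trace, and it assumes the keys key0/ckey0 were registered earlier in the test, outside this excerpt.

# one iteration of the connect_authenticate cycle, RPC side only (sketch)
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b   # host NQN used throughout this run
SUBNQN=nqn.2024-03.io.spdk:cnode0
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# restrict the host-side bdev driver to the digest/dhgroup under test
$RPC -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192

# authorize the host on the subsystem with key0, and ckey0 for bidirectional authentication
$RPC nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key key0 --dhchap-ctrlr-key ckey0

# attach a controller over TCP; this only succeeds if DH-HMAC-CHAP completes
$RPC -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
    -q "$HOSTNQN" -n "$SUBNQN" -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0

# verify the negotiated parameters on the target's qpair
$RPC -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'    # expect nvme0
$RPC nvmf_subsystem_get_qpairs "$SUBNQN" | jq -r '.[0].auth.digest'        # expect sha256
$RPC nvmf_subsystem_get_qpairs "$SUBNQN" | jq -r '.[0].auth.dhgroup'       # expect ffdhe8192
$RPC nvmf_subsystem_get_qpairs "$SUBNQN" | jq -r '.[0].auth.state'         # expect completed

# tear down the bdev controller before the nvme-cli connection check
$RPC -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
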
00:13:13.201 20:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:13.201 20:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:13.201 20:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:13.458 20:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:13:13.458 20:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:13.458 20:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:13.458 20:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:13.458 20:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:13.458 20:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:13.458 20:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:13.458 20:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.458 20:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:13.459 20:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.459 20:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:13.459 20:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:13.459 20:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:14.024 00:13:14.024 20:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:14.024 20:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:14.024 20:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:14.591 20:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:14.591 20:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:14.591 20:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.591 20:41:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:14.591 20:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.591 20:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:14.591 { 00:13:14.591 "cntlid": 43, 00:13:14.591 "qid": 0, 00:13:14.591 "state": "enabled", 00:13:14.591 "thread": "nvmf_tgt_poll_group_000", 00:13:14.591 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b", 00:13:14.591 "listen_address": { 00:13:14.591 "trtype": "TCP", 00:13:14.591 "adrfam": "IPv4", 00:13:14.591 "traddr": "10.0.0.3", 00:13:14.591 "trsvcid": "4420" 00:13:14.591 }, 00:13:14.591 "peer_address": { 00:13:14.591 "trtype": "TCP", 00:13:14.591 "adrfam": "IPv4", 00:13:14.591 "traddr": "10.0.0.1", 00:13:14.591 "trsvcid": "45434" 00:13:14.591 }, 00:13:14.591 "auth": { 00:13:14.591 "state": "completed", 00:13:14.591 "digest": "sha256", 00:13:14.591 "dhgroup": "ffdhe8192" 00:13:14.591 } 00:13:14.591 } 00:13:14.591 ]' 00:13:14.591 20:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:14.591 20:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:14.591 20:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:14.591 20:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:14.591 20:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:14.591 20:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:14.591 20:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:14.591 20:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:15.156 20:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTgxODEwNzM3NGU1M2M2NDJmZjkwNGI2N2Y4ZjRkMzEJJzKk: --dhchap-ctrl-secret DHHC-1:02:YjY4YTc0ZTg0YjJmMmM5NDc2MTJjOTk4OTZmMGI5MzMzNWZmYjBmNjZiZTFlZGIx44n4Fg==: 00:13:15.156 20:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b --hostid 5b7a0101-ee75-44bd-b64f-b6a56d193f2b -l 0 --dhchap-secret DHHC-1:01:OTgxODEwNzM3NGU1M2M2NDJmZjkwNGI2N2Y4ZjRkMzEJJzKk: --dhchap-ctrl-secret DHHC-1:02:YjY4YTc0ZTg0YjJmMmM5NDc2MTJjOTk4OTZmMGI5MzMzNWZmYjBmNjZiZTFlZGIx44n4Fg==: 00:13:15.721 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:15.721 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:15.721 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b 00:13:15.721 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.721 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
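
[Editor's note, not part of the captured log] Each cycle ends with the same host-side check: nvme-cli connects with the plaintext DHHC-1 secrets, disconnects, and the host is removed from the subsystem before the next combination. A sketch of that half, again using only the commands, addresses, and NQNs visible in this trace; DHHC1_KEY and DHHC1_CTRL are placeholders for the DHHC-1:xx:...: strings printed in the log.

# nvme-cli half of the cycle (sketch)
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b
HOSTID=5b7a0101-ee75-44bd-b64f-b6a56d193f2b
SUBNQN=nqn.2024-03.io.spdk:cnode0
DHHC1_KEY='DHHC-1:...'     # placeholder: host secret for the key under test, as printed in the trace
DHHC1_CTRL='DHHC-1:...'    # placeholder: controller (bidirectional) secret, as printed in the trace

# connect from the kernel initiator, authenticating with the same key pair
nvme connect -t tcp -a 10.0.0.3 -n "$SUBNQN" -i 1 -q "$HOSTNQN" --hostid "$HOSTID" -l 0 \
    --dhchap-secret "$DHHC1_KEY" --dhchap-ctrl-secret "$DHHC1_CTRL"

# drop the connection and de-authorize the host before the next digest/dhgroup combination
nvme disconnect -n "$SUBNQN"
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"
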
00:13:15.721 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.721 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:15.721 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:15.721 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:16.287 20:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:13:16.287 20:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:16.287 20:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:16.287 20:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:16.287 20:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:16.287 20:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:16.287 20:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:16.287 20:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.287 20:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:16.287 20:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.287 20:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:16.287 20:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:16.287 20:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:16.854 00:13:16.854 20:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:16.854 20:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:16.854 20:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:17.421 20:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:17.421 20:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:17.421 20:41:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.421 20:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:17.421 20:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.421 20:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:17.421 { 00:13:17.421 "cntlid": 45, 00:13:17.421 "qid": 0, 00:13:17.421 "state": "enabled", 00:13:17.421 "thread": "nvmf_tgt_poll_group_000", 00:13:17.421 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b", 00:13:17.421 "listen_address": { 00:13:17.421 "trtype": "TCP", 00:13:17.421 "adrfam": "IPv4", 00:13:17.421 "traddr": "10.0.0.3", 00:13:17.421 "trsvcid": "4420" 00:13:17.421 }, 00:13:17.421 "peer_address": { 00:13:17.421 "trtype": "TCP", 00:13:17.421 "adrfam": "IPv4", 00:13:17.421 "traddr": "10.0.0.1", 00:13:17.421 "trsvcid": "45450" 00:13:17.421 }, 00:13:17.421 "auth": { 00:13:17.421 "state": "completed", 00:13:17.421 "digest": "sha256", 00:13:17.421 "dhgroup": "ffdhe8192" 00:13:17.421 } 00:13:17.421 } 00:13:17.421 ]' 00:13:17.421 20:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:17.421 20:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:17.421 20:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:17.421 20:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:17.421 20:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:17.421 20:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:17.421 20:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:17.421 20:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:17.678 20:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NWE0MjY1NjEzMWEyYTU3YTczYzZkZWNiOTljM2U1MWFmZTJiMzY3NDI4ODJlMDQ4yd9QJg==: --dhchap-ctrl-secret DHHC-1:01:NmM5YTRkNzg4YjdjZWM4MjcxMmNkMjQxYzg1ODFhN2P3y4Ex: 00:13:17.678 20:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b --hostid 5b7a0101-ee75-44bd-b64f-b6a56d193f2b -l 0 --dhchap-secret DHHC-1:02:NWE0MjY1NjEzMWEyYTU3YTczYzZkZWNiOTljM2U1MWFmZTJiMzY3NDI4ODJlMDQ4yd9QJg==: --dhchap-ctrl-secret DHHC-1:01:NmM5YTRkNzg4YjdjZWM4MjcxMmNkMjQxYzg1ODFhN2P3y4Ex: 00:13:18.612 20:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:18.612 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:18.612 20:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b 00:13:18.612 20:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
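Each pass also exercises the in-kernel initiator: the nvme_connect call above hands nvme-cli the host secret and, for bidirectional authentication, the controller secret as DHHC-1 strings, and the following disconnect plus nvmf_subsystem_remove_host undoes the pairing. A sketch of that step with placeholder secrets rather than the ones from this run (the leading DHHC-1:NN: field encodes how the secret was transformed, e.g. 00 for an untransformed secret and 01/02/03 for SHA-256/384/512, which is why the key strings in the log differ in that position):

subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b

# Connect through the kernel NVMe/TCP initiator; DH-HMAC-CHAP runs as part
# of the CONNECT exchange using the secrets supplied here.
nvme connect -t tcp -a 10.0.0.3 -n "$subnqn" -i 1 \
    -q "$hostnqn" --hostid 5b7a0101-ee75-44bd-b64f-b6a56d193f2b -l 0 \
    --dhchap-secret 'DHHC-1:01:<host secret>:' \
    --dhchap-ctrl-secret 'DHHC-1:02:<controller secret>:'

# Tear the controller down again once the connection has been verified.
nvme disconnect -n "$subnqn"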
00:13:18.612 20:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:18.612 20:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.612 20:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:18.612 20:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:18.612 20:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:18.869 20:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:13:18.869 20:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:18.869 20:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:18.869 20:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:18.869 20:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:18.869 20:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:18.869 20:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b --dhchap-key key3 00:13:18.869 20:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.869 20:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:18.869 20:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.869 20:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:18.869 20:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:18.869 20:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:19.806 00:13:19.806 20:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:19.806 20:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:19.806 20:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:20.075 20:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:20.075 20:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:20.075 
20:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.075 20:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:20.075 20:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.075 20:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:20.075 { 00:13:20.075 "cntlid": 47, 00:13:20.075 "qid": 0, 00:13:20.075 "state": "enabled", 00:13:20.075 "thread": "nvmf_tgt_poll_group_000", 00:13:20.075 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b", 00:13:20.075 "listen_address": { 00:13:20.075 "trtype": "TCP", 00:13:20.075 "adrfam": "IPv4", 00:13:20.075 "traddr": "10.0.0.3", 00:13:20.075 "trsvcid": "4420" 00:13:20.075 }, 00:13:20.075 "peer_address": { 00:13:20.076 "trtype": "TCP", 00:13:20.076 "adrfam": "IPv4", 00:13:20.076 "traddr": "10.0.0.1", 00:13:20.076 "trsvcid": "56778" 00:13:20.076 }, 00:13:20.076 "auth": { 00:13:20.076 "state": "completed", 00:13:20.076 "digest": "sha256", 00:13:20.076 "dhgroup": "ffdhe8192" 00:13:20.076 } 00:13:20.076 } 00:13:20.076 ]' 00:13:20.076 20:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:20.076 20:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:20.076 20:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:20.076 20:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:20.076 20:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:20.076 20:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:20.076 20:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:20.076 20:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:20.641 20:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGM1YTRlNjkyYjkzZTc2Y2YzZGNiNzI0NWY5MmZhNmE3NzdhMTE4NTNmNTMzYmNlNDI2NjBiZWI0NzRmMjUwOfXqmRU=: 00:13:20.641 20:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b --hostid 5b7a0101-ee75-44bd-b64f-b6a56d193f2b -l 0 --dhchap-secret DHHC-1:03:NGM1YTRlNjkyYjkzZTc2Y2YzZGNiNzI0NWY5MmZhNmE3NzdhMTE4NTNmNTMzYmNlNDI2NjBiZWI0NzRmMjUwOfXqmRU=: 00:13:21.208 20:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:21.208 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:21.208 20:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b 00:13:21.208 20:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.208 20:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
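With that, the sha256/ffdhe8192 combination has been driven through every key index, and the trace below switches to sha384 with the null DH group. The repetition comes from the nested loops visible in the trace (for digest / for dhgroup / for keyid); a rough reconstruction of that driver follows, where the array contents are an assumption and hostrpc/connect_authenticate are the auth.sh helpers whose expansions appear throughout this log:

# Hypothetical reconstruction of the driver loop; only the structure is
# taken from the trace, the array contents are illustrative.
digests=("sha256" "sha384" "sha512")
dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192")
keys=("key0" "key1" "key2" "key3")   # placeholder; auth.sh generates these

for digest in "${digests[@]}"; do
  for dhgroup in "${dhgroups[@]}"; do
    for keyid in "${!keys[@]}"; do
      # Pin the host to one digest/dhgroup pair, then run a full
      # attach/verify/detach cycle with key $keyid.
      hostrpc bdev_nvme_set_options \
        --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
      connect_authenticate "$digest" "$dhgroup" "$keyid"
    done
  done
done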
00:13:21.208 20:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.208 20:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:13:21.208 20:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:21.208 20:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:21.208 20:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:21.208 20:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:21.776 20:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:13:21.776 20:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:21.776 20:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:21.776 20:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:13:21.776 20:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:21.776 20:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:21.776 20:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:21.776 20:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.776 20:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:21.776 20:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.776 20:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:21.776 20:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:21.776 20:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:22.035 00:13:22.035 20:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:22.035 20:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:22.035 20:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:22.293 20:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:22.293 20:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:22.293 20:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.293 20:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:22.293 20:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.293 20:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:22.293 { 00:13:22.293 "cntlid": 49, 00:13:22.293 "qid": 0, 00:13:22.293 "state": "enabled", 00:13:22.293 "thread": "nvmf_tgt_poll_group_000", 00:13:22.293 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b", 00:13:22.293 "listen_address": { 00:13:22.293 "trtype": "TCP", 00:13:22.293 "adrfam": "IPv4", 00:13:22.293 "traddr": "10.0.0.3", 00:13:22.293 "trsvcid": "4420" 00:13:22.293 }, 00:13:22.293 "peer_address": { 00:13:22.293 "trtype": "TCP", 00:13:22.293 "adrfam": "IPv4", 00:13:22.293 "traddr": "10.0.0.1", 00:13:22.293 "trsvcid": "56812" 00:13:22.293 }, 00:13:22.293 "auth": { 00:13:22.293 "state": "completed", 00:13:22.293 "digest": "sha384", 00:13:22.293 "dhgroup": "null" 00:13:22.293 } 00:13:22.293 } 00:13:22.293 ]' 00:13:22.293 20:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:22.551 20:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:22.551 20:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:22.551 20:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:13:22.551 20:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:22.551 20:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:22.551 20:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:22.551 20:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:22.809 20:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MDE3MjVlOGU5ZmYyMGY4NzNlZDQxZmEzODE4NGU5YzRkNzQxMjlhYzQ4Njc3Mzcyww816A==: --dhchap-ctrl-secret DHHC-1:03:MjQ4NGMyNTkzNWMxNmU2OTRmYTkxYjdjOWU2MGY5OWY4OTc2NWQzMzRjMTI0NTZjYmMxZGJlMjRmMDdlNTRkZRKhdRk=: 00:13:22.809 20:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b --hostid 5b7a0101-ee75-44bd-b64f-b6a56d193f2b -l 0 --dhchap-secret DHHC-1:00:MDE3MjVlOGU5ZmYyMGY4NzNlZDQxZmEzODE4NGU5YzRkNzQxMjlhYzQ4Njc3Mzcyww816A==: --dhchap-ctrl-secret DHHC-1:03:MjQ4NGMyNTkzNWMxNmU2OTRmYTkxYjdjOWU2MGY5OWY4OTc2NWQzMzRjMTI0NTZjYmMxZGJlMjRmMDdlNTRkZRKhdRk=: 00:13:23.743 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:23.743 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:23.743 20:41:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b 00:13:23.743 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.743 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:23.743 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.743 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:23.743 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:23.743 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:24.001 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:13:24.001 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:24.001 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:24.001 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:13:24.001 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:24.001 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:24.001 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:24.001 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.001 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:24.001 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.001 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:24.001 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:24.001 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:24.283 00:13:24.283 20:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:24.283 20:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 
00:13:24.283 20:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:24.544 20:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:24.544 20:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:24.544 20:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.544 20:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:24.544 20:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.544 20:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:24.544 { 00:13:24.544 "cntlid": 51, 00:13:24.544 "qid": 0, 00:13:24.544 "state": "enabled", 00:13:24.544 "thread": "nvmf_tgt_poll_group_000", 00:13:24.544 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b", 00:13:24.544 "listen_address": { 00:13:24.544 "trtype": "TCP", 00:13:24.544 "adrfam": "IPv4", 00:13:24.544 "traddr": "10.0.0.3", 00:13:24.544 "trsvcid": "4420" 00:13:24.544 }, 00:13:24.544 "peer_address": { 00:13:24.544 "trtype": "TCP", 00:13:24.544 "adrfam": "IPv4", 00:13:24.544 "traddr": "10.0.0.1", 00:13:24.544 "trsvcid": "56842" 00:13:24.544 }, 00:13:24.544 "auth": { 00:13:24.544 "state": "completed", 00:13:24.544 "digest": "sha384", 00:13:24.544 "dhgroup": "null" 00:13:24.544 } 00:13:24.544 } 00:13:24.544 ]' 00:13:24.544 20:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:24.544 20:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:24.544 20:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:24.544 20:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:13:24.544 20:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:24.801 20:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:24.801 20:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:24.801 20:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:25.058 20:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTgxODEwNzM3NGU1M2M2NDJmZjkwNGI2N2Y4ZjRkMzEJJzKk: --dhchap-ctrl-secret DHHC-1:02:YjY4YTc0ZTg0YjJmMmM5NDc2MTJjOTk4OTZmMGI5MzMzNWZmYjBmNjZiZTFlZGIx44n4Fg==: 00:13:25.058 20:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b --hostid 5b7a0101-ee75-44bd-b64f-b6a56d193f2b -l 0 --dhchap-secret DHHC-1:01:OTgxODEwNzM3NGU1M2M2NDJmZjkwNGI2N2Y4ZjRkMzEJJzKk: --dhchap-ctrl-secret DHHC-1:02:YjY4YTc0ZTg0YjJmMmM5NDc2MTJjOTk4OTZmMGI5MzMzNWZmYjBmNjZiZTFlZGIx44n4Fg==: 00:13:25.624 20:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:25.624 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:25.624 20:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b 00:13:25.624 20:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.624 20:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:25.624 20:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.624 20:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:25.624 20:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:25.624 20:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:25.882 20:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:13:25.882 20:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:25.882 20:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:25.882 20:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:13:25.882 20:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:25.882 20:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:25.882 20:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:25.882 20:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.882 20:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:25.882 20:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.882 20:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:25.882 20:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:25.882 20:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:26.447 00:13:26.447 20:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:26.447 20:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name' 00:13:26.447 20:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:26.704 20:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:26.704 20:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:26.704 20:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.704 20:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:26.704 20:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.704 20:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:26.704 { 00:13:26.704 "cntlid": 53, 00:13:26.704 "qid": 0, 00:13:26.704 "state": "enabled", 00:13:26.704 "thread": "nvmf_tgt_poll_group_000", 00:13:26.704 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b", 00:13:26.704 "listen_address": { 00:13:26.704 "trtype": "TCP", 00:13:26.704 "adrfam": "IPv4", 00:13:26.704 "traddr": "10.0.0.3", 00:13:26.704 "trsvcid": "4420" 00:13:26.704 }, 00:13:26.704 "peer_address": { 00:13:26.704 "trtype": "TCP", 00:13:26.705 "adrfam": "IPv4", 00:13:26.705 "traddr": "10.0.0.1", 00:13:26.705 "trsvcid": "56874" 00:13:26.705 }, 00:13:26.705 "auth": { 00:13:26.705 "state": "completed", 00:13:26.705 "digest": "sha384", 00:13:26.705 "dhgroup": "null" 00:13:26.705 } 00:13:26.705 } 00:13:26.705 ]' 00:13:26.705 20:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:26.705 20:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:26.705 20:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:26.705 20:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:13:26.705 20:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:26.705 20:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:26.705 20:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:26.705 20:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:26.962 20:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NWE0MjY1NjEzMWEyYTU3YTczYzZkZWNiOTljM2U1MWFmZTJiMzY3NDI4ODJlMDQ4yd9QJg==: --dhchap-ctrl-secret DHHC-1:01:NmM5YTRkNzg4YjdjZWM4MjcxMmNkMjQxYzg1ODFhN2P3y4Ex: 00:13:26.962 20:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b --hostid 5b7a0101-ee75-44bd-b64f-b6a56d193f2b -l 0 --dhchap-secret DHHC-1:02:NWE0MjY1NjEzMWEyYTU3YTczYzZkZWNiOTljM2U1MWFmZTJiMzY3NDI4ODJlMDQ4yd9QJg==: --dhchap-ctrl-secret DHHC-1:01:NmM5YTRkNzg4YjdjZWM4MjcxMmNkMjQxYzg1ODFhN2P3y4Ex: 00:13:27.897 20:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:27.897 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:27.897 20:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b 00:13:27.897 20:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.897 20:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:27.897 20:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.897 20:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:27.897 20:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:27.897 20:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:28.156 20:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:13:28.156 20:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:28.156 20:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:28.156 20:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:13:28.156 20:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:28.156 20:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:28.156 20:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b --dhchap-key key3 00:13:28.156 20:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.156 20:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:28.156 20:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.156 20:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:28.156 20:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:28.156 20:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:28.414 00:13:28.414 20:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:28.414 20:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 
00:13:28.414 20:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:28.690 20:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:28.690 20:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:28.690 20:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.690 20:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:28.690 20:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.690 20:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:28.690 { 00:13:28.690 "cntlid": 55, 00:13:28.690 "qid": 0, 00:13:28.690 "state": "enabled", 00:13:28.690 "thread": "nvmf_tgt_poll_group_000", 00:13:28.690 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b", 00:13:28.690 "listen_address": { 00:13:28.690 "trtype": "TCP", 00:13:28.690 "adrfam": "IPv4", 00:13:28.690 "traddr": "10.0.0.3", 00:13:28.690 "trsvcid": "4420" 00:13:28.690 }, 00:13:28.690 "peer_address": { 00:13:28.690 "trtype": "TCP", 00:13:28.690 "adrfam": "IPv4", 00:13:28.690 "traddr": "10.0.0.1", 00:13:28.690 "trsvcid": "56228" 00:13:28.690 }, 00:13:28.690 "auth": { 00:13:28.690 "state": "completed", 00:13:28.690 "digest": "sha384", 00:13:28.690 "dhgroup": "null" 00:13:28.690 } 00:13:28.690 } 00:13:28.690 ]' 00:13:28.690 20:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:28.690 20:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:28.690 20:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:28.690 20:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:13:28.690 20:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:28.690 20:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:28.690 20:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:28.690 20:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:28.977 20:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGM1YTRlNjkyYjkzZTc2Y2YzZGNiNzI0NWY5MmZhNmE3NzdhMTE4NTNmNTMzYmNlNDI2NjBiZWI0NzRmMjUwOfXqmRU=: 00:13:28.977 20:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b --hostid 5b7a0101-ee75-44bd-b64f-b6a56d193f2b -l 0 --dhchap-secret DHHC-1:03:NGM1YTRlNjkyYjkzZTc2Y2YzZGNiNzI0NWY5MmZhNmE3NzdhMTE4NTNmNTMzYmNlNDI2NjBiZWI0NzRmMjUwOfXqmRU=: 00:13:29.911 20:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:29.911 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
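Every connect_authenticate invocation in this trace reduces to the same RPC sequence across the two sockets: pin the host-side bdev_nvme options to one digest/dhgroup pair, authorize the host NQN on the subsystem with the DH-CHAP key under test, attach a controller (which is where authentication actually runs), verify, and tear everything down. A condensed sketch of one iteration, using sha384/ffdhe2048 (the combination the log moves to next) and assuming key0/ckey0 were registered with the host keyring earlier in the test:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
hostsock=/var/tmp/host.sock
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b

# Host side: only allow the digest/dhgroup pair under test.
$rpc -s "$hostsock" bdev_nvme_set_options \
    --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048

# Target side: authorize the host NQN, binding it to the key under test
# (plus a controller key when bidirectional auth is being exercised).
$rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0

# Host side: attach a controller; DH-HMAC-CHAP runs on this CONNECT.
$rpc -s "$hostsock" bdev_nvme_attach_controller -t tcp -f ipv4 \
    -a 10.0.0.3 -s 4420 -q "$hostnqn" -n "$subnqn" -b nvme0 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0

# Confirm the controller exists, then clean up both sides.
$rpc -s "$hostsock" bdev_nvme_get_controllers | jq -r '.[].name'
$rpc -s "$hostsock" bdev_nvme_detach_controller nvme0
$rpc nvmf_subsystem_remove_host "$subnqn" "$hostnqn"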
00:13:29.911 20:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b 00:13:29.911 20:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.911 20:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:29.911 20:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.911 20:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:29.911 20:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:29.911 20:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:13:29.911 20:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:13:30.169 20:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:13:30.169 20:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:30.169 20:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:30.169 20:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:13:30.169 20:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:30.169 20:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:30.169 20:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:30.169 20:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.169 20:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:30.169 20:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.169 20:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:30.169 20:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:30.169 20:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:30.735 00:13:30.735 20:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:30.735 20:41:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:30.735 20:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:30.993 20:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:30.993 20:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:30.993 20:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.993 20:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:30.993 20:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.993 20:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:30.993 { 00:13:30.993 "cntlid": 57, 00:13:30.993 "qid": 0, 00:13:30.993 "state": "enabled", 00:13:30.993 "thread": "nvmf_tgt_poll_group_000", 00:13:30.993 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b", 00:13:30.993 "listen_address": { 00:13:30.993 "trtype": "TCP", 00:13:30.993 "adrfam": "IPv4", 00:13:30.993 "traddr": "10.0.0.3", 00:13:30.993 "trsvcid": "4420" 00:13:30.993 }, 00:13:30.993 "peer_address": { 00:13:30.993 "trtype": "TCP", 00:13:30.993 "adrfam": "IPv4", 00:13:30.993 "traddr": "10.0.0.1", 00:13:30.993 "trsvcid": "56262" 00:13:30.993 }, 00:13:30.993 "auth": { 00:13:30.993 "state": "completed", 00:13:30.993 "digest": "sha384", 00:13:30.993 "dhgroup": "ffdhe2048" 00:13:30.993 } 00:13:30.993 } 00:13:30.993 ]' 00:13:30.993 20:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:30.993 20:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:30.993 20:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:31.252 20:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:31.252 20:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:31.252 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:31.252 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:31.252 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:31.510 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MDE3MjVlOGU5ZmYyMGY4NzNlZDQxZmEzODE4NGU5YzRkNzQxMjlhYzQ4Njc3Mzcyww816A==: --dhchap-ctrl-secret DHHC-1:03:MjQ4NGMyNTkzNWMxNmU2OTRmYTkxYjdjOWU2MGY5OWY4OTc2NWQzMzRjMTI0NTZjYmMxZGJlMjRmMDdlNTRkZRKhdRk=: 00:13:31.510 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b --hostid 5b7a0101-ee75-44bd-b64f-b6a56d193f2b -l 0 --dhchap-secret DHHC-1:00:MDE3MjVlOGU5ZmYyMGY4NzNlZDQxZmEzODE4NGU5YzRkNzQxMjlhYzQ4Njc3Mzcyww816A==: 
--dhchap-ctrl-secret DHHC-1:03:MjQ4NGMyNTkzNWMxNmU2OTRmYTkxYjdjOWU2MGY5OWY4OTc2NWQzMzRjMTI0NTZjYmMxZGJlMjRmMDdlNTRkZRKhdRk=: 00:13:32.446 20:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:32.446 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:32.446 20:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b 00:13:32.446 20:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.446 20:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:32.447 20:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.447 20:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:32.447 20:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:13:32.447 20:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:13:32.447 20:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:13:32.447 20:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:32.447 20:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:32.447 20:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:13:32.447 20:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:32.447 20:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:32.447 20:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:32.447 20:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.447 20:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:32.447 20:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.447 20:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:32.447 20:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:32.447 20:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:33.014 00:13:33.014 20:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:33.014 20:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:33.014 20:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:33.322 20:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:33.322 20:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:33.322 20:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.322 20:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:33.322 20:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.322 20:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:33.322 { 00:13:33.322 "cntlid": 59, 00:13:33.322 "qid": 0, 00:13:33.322 "state": "enabled", 00:13:33.322 "thread": "nvmf_tgt_poll_group_000", 00:13:33.322 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b", 00:13:33.322 "listen_address": { 00:13:33.322 "trtype": "TCP", 00:13:33.322 "adrfam": "IPv4", 00:13:33.322 "traddr": "10.0.0.3", 00:13:33.322 "trsvcid": "4420" 00:13:33.322 }, 00:13:33.322 "peer_address": { 00:13:33.322 "trtype": "TCP", 00:13:33.322 "adrfam": "IPv4", 00:13:33.322 "traddr": "10.0.0.1", 00:13:33.322 "trsvcid": "56300" 00:13:33.322 }, 00:13:33.322 "auth": { 00:13:33.322 "state": "completed", 00:13:33.322 "digest": "sha384", 00:13:33.322 "dhgroup": "ffdhe2048" 00:13:33.322 } 00:13:33.322 } 00:13:33.322 ]' 00:13:33.322 20:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:33.322 20:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:33.322 20:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:33.322 20:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:33.322 20:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:33.589 20:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:33.589 20:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:33.589 20:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:33.848 20:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTgxODEwNzM3NGU1M2M2NDJmZjkwNGI2N2Y4ZjRkMzEJJzKk: --dhchap-ctrl-secret DHHC-1:02:YjY4YTc0ZTg0YjJmMmM5NDc2MTJjOTk4OTZmMGI5MzMzNWZmYjBmNjZiZTFlZGIx44n4Fg==: 00:13:33.848 20:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b --hostid 5b7a0101-ee75-44bd-b64f-b6a56d193f2b -l 0 --dhchap-secret DHHC-1:01:OTgxODEwNzM3NGU1M2M2NDJmZjkwNGI2N2Y4ZjRkMzEJJzKk: --dhchap-ctrl-secret DHHC-1:02:YjY4YTc0ZTg0YjJmMmM5NDc2MTJjOTk4OTZmMGI5MzMzNWZmYjBmNjZiZTFlZGIx44n4Fg==: 00:13:34.412 20:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:34.412 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:34.412 20:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b 00:13:34.412 20:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.412 20:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:34.412 20:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.412 20:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:34.412 20:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:13:34.412 20:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:13:34.670 20:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:13:34.670 20:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:34.670 20:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:34.670 20:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:13:34.670 20:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:34.670 20:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:34.670 20:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:34.670 20:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.670 20:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:34.928 20:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.928 20:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:34.928 20:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:34.928 20:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:35.187 00:13:35.187 20:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:35.187 20:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:35.187 20:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:35.755 20:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:35.755 20:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:35.755 20:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.755 20:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:35.755 20:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.755 20:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:35.755 { 00:13:35.755 "cntlid": 61, 00:13:35.755 "qid": 0, 00:13:35.755 "state": "enabled", 00:13:35.755 "thread": "nvmf_tgt_poll_group_000", 00:13:35.755 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b", 00:13:35.755 "listen_address": { 00:13:35.756 "trtype": "TCP", 00:13:35.756 "adrfam": "IPv4", 00:13:35.756 "traddr": "10.0.0.3", 00:13:35.756 "trsvcid": "4420" 00:13:35.756 }, 00:13:35.756 "peer_address": { 00:13:35.756 "trtype": "TCP", 00:13:35.756 "adrfam": "IPv4", 00:13:35.756 "traddr": "10.0.0.1", 00:13:35.756 "trsvcid": "56332" 00:13:35.756 }, 00:13:35.756 "auth": { 00:13:35.756 "state": "completed", 00:13:35.756 "digest": "sha384", 00:13:35.756 "dhgroup": "ffdhe2048" 00:13:35.756 } 00:13:35.756 } 00:13:35.756 ]' 00:13:35.756 20:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:35.756 20:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:35.756 20:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:35.756 20:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:35.756 20:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:35.756 20:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:35.756 20:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:35.756 20:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:36.323 20:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NWE0MjY1NjEzMWEyYTU3YTczYzZkZWNiOTljM2U1MWFmZTJiMzY3NDI4ODJlMDQ4yd9QJg==: --dhchap-ctrl-secret DHHC-1:01:NmM5YTRkNzg4YjdjZWM4MjcxMmNkMjQxYzg1ODFhN2P3y4Ex: 00:13:36.323 20:41:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b --hostid 5b7a0101-ee75-44bd-b64f-b6a56d193f2b -l 0 --dhchap-secret DHHC-1:02:NWE0MjY1NjEzMWEyYTU3YTczYzZkZWNiOTljM2U1MWFmZTJiMzY3NDI4ODJlMDQ4yd9QJg==: --dhchap-ctrl-secret DHHC-1:01:NmM5YTRkNzg4YjdjZWM4MjcxMmNkMjQxYzg1ODFhN2P3y4Ex: 00:13:36.890 20:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:36.890 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:36.890 20:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b 00:13:36.890 20:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.890 20:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:36.890 20:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.890 20:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:36.890 20:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:13:36.890 20:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:13:37.456 20:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:13:37.456 20:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:37.456 20:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:37.456 20:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:13:37.456 20:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:37.456 20:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:37.456 20:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b --dhchap-key key3 00:13:37.456 20:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.456 20:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:37.456 20:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.456 20:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:37.457 20:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:37.457 20:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:37.714 00:13:37.714 20:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:37.714 20:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:37.714 20:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:38.281 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:38.281 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:38.281 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.281 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:38.281 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.281 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:38.281 { 00:13:38.281 "cntlid": 63, 00:13:38.281 "qid": 0, 00:13:38.281 "state": "enabled", 00:13:38.281 "thread": "nvmf_tgt_poll_group_000", 00:13:38.281 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b", 00:13:38.281 "listen_address": { 00:13:38.281 "trtype": "TCP", 00:13:38.281 "adrfam": "IPv4", 00:13:38.281 "traddr": "10.0.0.3", 00:13:38.281 "trsvcid": "4420" 00:13:38.281 }, 00:13:38.281 "peer_address": { 00:13:38.281 "trtype": "TCP", 00:13:38.281 "adrfam": "IPv4", 00:13:38.281 "traddr": "10.0.0.1", 00:13:38.281 "trsvcid": "55816" 00:13:38.281 }, 00:13:38.281 "auth": { 00:13:38.281 "state": "completed", 00:13:38.281 "digest": "sha384", 00:13:38.281 "dhgroup": "ffdhe2048" 00:13:38.281 } 00:13:38.281 } 00:13:38.281 ]' 00:13:38.281 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:38.281 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:38.281 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:38.281 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:38.281 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:38.281 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:38.281 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:38.281 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:38.848 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGM1YTRlNjkyYjkzZTc2Y2YzZGNiNzI0NWY5MmZhNmE3NzdhMTE4NTNmNTMzYmNlNDI2NjBiZWI0NzRmMjUwOfXqmRU=: 00:13:38.848 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b --hostid 5b7a0101-ee75-44bd-b64f-b6a56d193f2b -l 0 --dhchap-secret DHHC-1:03:NGM1YTRlNjkyYjkzZTc2Y2YzZGNiNzI0NWY5MmZhNmE3NzdhMTE4NTNmNTMzYmNlNDI2NjBiZWI0NzRmMjUwOfXqmRU=: 00:13:39.414 20:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:39.414 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:39.415 20:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b 00:13:39.415 20:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.415 20:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:39.672 20:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.672 20:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:39.672 20:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:39.672 20:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:13:39.673 20:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:13:39.932 20:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:13:39.932 20:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:39.932 20:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:39.932 20:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:13:39.932 20:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:39.932 20:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:39.932 20:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:39.932 20:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.932 20:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:39.932 20:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.932 20:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:39.932 20:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:13:39.932 20:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:40.190 00:13:40.190 20:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:40.190 20:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:40.190 20:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:40.758 20:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:40.758 20:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:40.758 20:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.758 20:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:40.758 20:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.758 20:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:40.758 { 00:13:40.758 "cntlid": 65, 00:13:40.758 "qid": 0, 00:13:40.758 "state": "enabled", 00:13:40.758 "thread": "nvmf_tgt_poll_group_000", 00:13:40.758 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b", 00:13:40.758 "listen_address": { 00:13:40.758 "trtype": "TCP", 00:13:40.758 "adrfam": "IPv4", 00:13:40.758 "traddr": "10.0.0.3", 00:13:40.758 "trsvcid": "4420" 00:13:40.758 }, 00:13:40.758 "peer_address": { 00:13:40.758 "trtype": "TCP", 00:13:40.758 "adrfam": "IPv4", 00:13:40.758 "traddr": "10.0.0.1", 00:13:40.758 "trsvcid": "55822" 00:13:40.758 }, 00:13:40.758 "auth": { 00:13:40.758 "state": "completed", 00:13:40.758 "digest": "sha384", 00:13:40.758 "dhgroup": "ffdhe3072" 00:13:40.758 } 00:13:40.758 } 00:13:40.758 ]' 00:13:40.758 20:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:40.758 20:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:40.758 20:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:40.758 20:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:40.758 20:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:40.758 20:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:40.758 20:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:40.758 20:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:41.016 20:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:MDE3MjVlOGU5ZmYyMGY4NzNlZDQxZmEzODE4NGU5YzRkNzQxMjlhYzQ4Njc3Mzcyww816A==: --dhchap-ctrl-secret DHHC-1:03:MjQ4NGMyNTkzNWMxNmU2OTRmYTkxYjdjOWU2MGY5OWY4OTc2NWQzMzRjMTI0NTZjYmMxZGJlMjRmMDdlNTRkZRKhdRk=: 00:13:41.016 20:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b --hostid 5b7a0101-ee75-44bd-b64f-b6a56d193f2b -l 0 --dhchap-secret DHHC-1:00:MDE3MjVlOGU5ZmYyMGY4NzNlZDQxZmEzODE4NGU5YzRkNzQxMjlhYzQ4Njc3Mzcyww816A==: --dhchap-ctrl-secret DHHC-1:03:MjQ4NGMyNTkzNWMxNmU2OTRmYTkxYjdjOWU2MGY5OWY4OTc2NWQzMzRjMTI0NTZjYmMxZGJlMjRmMDdlNTRkZRKhdRk=: 00:13:41.952 20:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:41.952 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:41.952 20:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b 00:13:41.952 20:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.952 20:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:41.952 20:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.952 20:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:41.952 20:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:13:41.952 20:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:13:41.952 20:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:13:41.952 20:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:41.952 20:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:41.952 20:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:13:41.952 20:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:41.953 20:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:41.953 20:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:41.953 20:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.953 20:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:41.953 20:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.953 20:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:41.953 20:41:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:41.953 20:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:42.521 00:13:42.521 20:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:42.521 20:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:42.521 20:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:42.780 20:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:42.780 20:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:42.780 20:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.780 20:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:42.780 20:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.780 20:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:42.780 { 00:13:42.780 "cntlid": 67, 00:13:42.780 "qid": 0, 00:13:42.780 "state": "enabled", 00:13:42.780 "thread": "nvmf_tgt_poll_group_000", 00:13:42.780 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b", 00:13:42.780 "listen_address": { 00:13:42.780 "trtype": "TCP", 00:13:42.780 "adrfam": "IPv4", 00:13:42.780 "traddr": "10.0.0.3", 00:13:42.780 "trsvcid": "4420" 00:13:42.780 }, 00:13:42.780 "peer_address": { 00:13:42.780 "trtype": "TCP", 00:13:42.780 "adrfam": "IPv4", 00:13:42.780 "traddr": "10.0.0.1", 00:13:42.780 "trsvcid": "55842" 00:13:42.780 }, 00:13:42.780 "auth": { 00:13:42.780 "state": "completed", 00:13:42.780 "digest": "sha384", 00:13:42.780 "dhgroup": "ffdhe3072" 00:13:42.780 } 00:13:42.780 } 00:13:42.780 ]' 00:13:42.780 20:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:42.780 20:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:42.780 20:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:42.780 20:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:42.780 20:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:43.038 20:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:43.038 20:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:43.038 20:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:43.296 20:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTgxODEwNzM3NGU1M2M2NDJmZjkwNGI2N2Y4ZjRkMzEJJzKk: --dhchap-ctrl-secret DHHC-1:02:YjY4YTc0ZTg0YjJmMmM5NDc2MTJjOTk4OTZmMGI5MzMzNWZmYjBmNjZiZTFlZGIx44n4Fg==: 00:13:43.296 20:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b --hostid 5b7a0101-ee75-44bd-b64f-b6a56d193f2b -l 0 --dhchap-secret DHHC-1:01:OTgxODEwNzM3NGU1M2M2NDJmZjkwNGI2N2Y4ZjRkMzEJJzKk: --dhchap-ctrl-secret DHHC-1:02:YjY4YTc0ZTg0YjJmMmM5NDc2MTJjOTk4OTZmMGI5MzMzNWZmYjBmNjZiZTFlZGIx44n4Fg==: 00:13:44.229 20:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:44.229 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:44.230 20:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b 00:13:44.230 20:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.230 20:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:44.230 20:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.230 20:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:44.230 20:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:13:44.230 20:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:13:44.230 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:13:44.230 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:44.230 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:44.230 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:13:44.230 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:44.230 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:44.230 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:44.230 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.230 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:44.488 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.488 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:44.488 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:44.488 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:44.747 00:13:44.747 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:44.747 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:44.747 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:45.006 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:45.007 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:45.007 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.007 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:45.007 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.007 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:45.007 { 00:13:45.007 "cntlid": 69, 00:13:45.007 "qid": 0, 00:13:45.007 "state": "enabled", 00:13:45.007 "thread": "nvmf_tgt_poll_group_000", 00:13:45.007 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b", 00:13:45.007 "listen_address": { 00:13:45.007 "trtype": "TCP", 00:13:45.007 "adrfam": "IPv4", 00:13:45.007 "traddr": "10.0.0.3", 00:13:45.007 "trsvcid": "4420" 00:13:45.007 }, 00:13:45.007 "peer_address": { 00:13:45.007 "trtype": "TCP", 00:13:45.007 "adrfam": "IPv4", 00:13:45.007 "traddr": "10.0.0.1", 00:13:45.007 "trsvcid": "55878" 00:13:45.007 }, 00:13:45.007 "auth": { 00:13:45.007 "state": "completed", 00:13:45.007 "digest": "sha384", 00:13:45.007 "dhgroup": "ffdhe3072" 00:13:45.007 } 00:13:45.007 } 00:13:45.007 ]' 00:13:45.007 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:45.007 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:45.007 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:45.265 20:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:45.265 20:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:45.265 20:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:45.265 20:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:13:45.265 20:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:45.524 20:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NWE0MjY1NjEzMWEyYTU3YTczYzZkZWNiOTljM2U1MWFmZTJiMzY3NDI4ODJlMDQ4yd9QJg==: --dhchap-ctrl-secret DHHC-1:01:NmM5YTRkNzg4YjdjZWM4MjcxMmNkMjQxYzg1ODFhN2P3y4Ex: 00:13:45.524 20:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b --hostid 5b7a0101-ee75-44bd-b64f-b6a56d193f2b -l 0 --dhchap-secret DHHC-1:02:NWE0MjY1NjEzMWEyYTU3YTczYzZkZWNiOTljM2U1MWFmZTJiMzY3NDI4ODJlMDQ4yd9QJg==: --dhchap-ctrl-secret DHHC-1:01:NmM5YTRkNzg4YjdjZWM4MjcxMmNkMjQxYzg1ODFhN2P3y4Ex: 00:13:46.459 20:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:46.459 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:46.459 20:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b 00:13:46.459 20:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.459 20:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:46.459 20:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.459 20:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:46.459 20:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:13:46.459 20:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:13:46.717 20:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:13:46.717 20:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:46.717 20:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:46.717 20:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:13:46.717 20:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:46.717 20:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:46.717 20:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b --dhchap-key key3 00:13:46.717 20:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.717 20:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:46.717 20:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.717 20:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:46.717 20:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:46.717 20:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:46.976 00:13:46.976 20:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:46.976 20:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:46.976 20:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:47.235 20:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:47.235 20:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:47.235 20:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.235 20:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:47.235 20:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.235 20:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:47.235 { 00:13:47.235 "cntlid": 71, 00:13:47.235 "qid": 0, 00:13:47.235 "state": "enabled", 00:13:47.235 "thread": "nvmf_tgt_poll_group_000", 00:13:47.235 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b", 00:13:47.235 "listen_address": { 00:13:47.235 "trtype": "TCP", 00:13:47.235 "adrfam": "IPv4", 00:13:47.235 "traddr": "10.0.0.3", 00:13:47.235 "trsvcid": "4420" 00:13:47.235 }, 00:13:47.235 "peer_address": { 00:13:47.235 "trtype": "TCP", 00:13:47.235 "adrfam": "IPv4", 00:13:47.235 "traddr": "10.0.0.1", 00:13:47.235 "trsvcid": "55910" 00:13:47.235 }, 00:13:47.235 "auth": { 00:13:47.235 "state": "completed", 00:13:47.235 "digest": "sha384", 00:13:47.235 "dhgroup": "ffdhe3072" 00:13:47.235 } 00:13:47.235 } 00:13:47.235 ]' 00:13:47.235 20:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:47.235 20:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:47.235 20:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:47.236 20:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:47.236 20:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:47.494 20:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:47.494 20:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:47.494 20:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:47.753 20:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGM1YTRlNjkyYjkzZTc2Y2YzZGNiNzI0NWY5MmZhNmE3NzdhMTE4NTNmNTMzYmNlNDI2NjBiZWI0NzRmMjUwOfXqmRU=: 00:13:47.753 20:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b --hostid 5b7a0101-ee75-44bd-b64f-b6a56d193f2b -l 0 --dhchap-secret DHHC-1:03:NGM1YTRlNjkyYjkzZTc2Y2YzZGNiNzI0NWY5MmZhNmE3NzdhMTE4NTNmNTMzYmNlNDI2NjBiZWI0NzRmMjUwOfXqmRU=: 00:13:48.320 20:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:48.320 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:48.320 20:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b 00:13:48.320 20:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.320 20:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:48.320 20:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.320 20:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:48.320 20:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:48.320 20:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:13:48.320 20:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:13:48.578 20:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:13:48.578 20:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:48.578 20:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:48.578 20:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:13:48.578 20:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:48.578 20:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:48.578 20:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:48.578 20:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.578 20:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:48.836 20:41:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.836 20:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:48.836 20:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:48.836 20:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:49.138 00:13:49.138 20:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:49.138 20:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:49.138 20:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:49.395 20:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:49.395 20:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:49.395 20:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.395 20:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:49.395 20:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.395 20:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:49.395 { 00:13:49.395 "cntlid": 73, 00:13:49.395 "qid": 0, 00:13:49.395 "state": "enabled", 00:13:49.395 "thread": "nvmf_tgt_poll_group_000", 00:13:49.395 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b", 00:13:49.395 "listen_address": { 00:13:49.395 "trtype": "TCP", 00:13:49.395 "adrfam": "IPv4", 00:13:49.395 "traddr": "10.0.0.3", 00:13:49.395 "trsvcid": "4420" 00:13:49.395 }, 00:13:49.395 "peer_address": { 00:13:49.395 "trtype": "TCP", 00:13:49.395 "adrfam": "IPv4", 00:13:49.395 "traddr": "10.0.0.1", 00:13:49.395 "trsvcid": "36006" 00:13:49.395 }, 00:13:49.395 "auth": { 00:13:49.395 "state": "completed", 00:13:49.395 "digest": "sha384", 00:13:49.395 "dhgroup": "ffdhe4096" 00:13:49.395 } 00:13:49.395 } 00:13:49.395 ]' 00:13:49.395 20:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:49.652 20:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:49.652 20:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:49.652 20:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:49.652 20:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:49.652 20:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- 
# [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:49.652 20:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:49.652 20:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:49.910 20:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MDE3MjVlOGU5ZmYyMGY4NzNlZDQxZmEzODE4NGU5YzRkNzQxMjlhYzQ4Njc3Mzcyww816A==: --dhchap-ctrl-secret DHHC-1:03:MjQ4NGMyNTkzNWMxNmU2OTRmYTkxYjdjOWU2MGY5OWY4OTc2NWQzMzRjMTI0NTZjYmMxZGJlMjRmMDdlNTRkZRKhdRk=: 00:13:49.910 20:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b --hostid 5b7a0101-ee75-44bd-b64f-b6a56d193f2b -l 0 --dhchap-secret DHHC-1:00:MDE3MjVlOGU5ZmYyMGY4NzNlZDQxZmEzODE4NGU5YzRkNzQxMjlhYzQ4Njc3Mzcyww816A==: --dhchap-ctrl-secret DHHC-1:03:MjQ4NGMyNTkzNWMxNmU2OTRmYTkxYjdjOWU2MGY5OWY4OTc2NWQzMzRjMTI0NTZjYmMxZGJlMjRmMDdlNTRkZRKhdRk=: 00:13:50.843 20:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:50.843 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:50.843 20:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b 00:13:50.843 20:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.843 20:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:50.843 20:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.843 20:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:50.843 20:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:13:50.843 20:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:13:51.101 20:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:13:51.101 20:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:51.101 20:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:51.101 20:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:13:51.101 20:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:51.101 20:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:51.101 20:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:51.101 20:41:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.101 20:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:51.101 20:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.101 20:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:51.101 20:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:51.101 20:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:51.667 00:13:51.667 20:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:51.667 20:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:51.667 20:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:51.925 20:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:51.925 20:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:51.925 20:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.925 20:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:51.925 20:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.925 20:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:51.925 { 00:13:51.925 "cntlid": 75, 00:13:51.925 "qid": 0, 00:13:51.925 "state": "enabled", 00:13:51.925 "thread": "nvmf_tgt_poll_group_000", 00:13:51.925 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b", 00:13:51.925 "listen_address": { 00:13:51.925 "trtype": "TCP", 00:13:51.925 "adrfam": "IPv4", 00:13:51.925 "traddr": "10.0.0.3", 00:13:51.925 "trsvcid": "4420" 00:13:51.925 }, 00:13:51.925 "peer_address": { 00:13:51.925 "trtype": "TCP", 00:13:51.925 "adrfam": "IPv4", 00:13:51.925 "traddr": "10.0.0.1", 00:13:51.925 "trsvcid": "36042" 00:13:51.925 }, 00:13:51.925 "auth": { 00:13:51.925 "state": "completed", 00:13:51.925 "digest": "sha384", 00:13:51.925 "dhgroup": "ffdhe4096" 00:13:51.925 } 00:13:51.925 } 00:13:51.925 ]' 00:13:51.925 20:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:51.925 20:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:51.925 20:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:52.183 20:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 
== \f\f\d\h\e\4\0\9\6 ]] 00:13:52.183 20:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:52.183 20:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:52.183 20:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:52.183 20:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:52.440 20:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTgxODEwNzM3NGU1M2M2NDJmZjkwNGI2N2Y4ZjRkMzEJJzKk: --dhchap-ctrl-secret DHHC-1:02:YjY4YTc0ZTg0YjJmMmM5NDc2MTJjOTk4OTZmMGI5MzMzNWZmYjBmNjZiZTFlZGIx44n4Fg==: 00:13:52.440 20:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b --hostid 5b7a0101-ee75-44bd-b64f-b6a56d193f2b -l 0 --dhchap-secret DHHC-1:01:OTgxODEwNzM3NGU1M2M2NDJmZjkwNGI2N2Y4ZjRkMzEJJzKk: --dhchap-ctrl-secret DHHC-1:02:YjY4YTc0ZTg0YjJmMmM5NDc2MTJjOTk4OTZmMGI5MzMzNWZmYjBmNjZiZTFlZGIx44n4Fg==: 00:13:53.013 20:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:53.013 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:53.013 20:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b 00:13:53.013 20:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.013 20:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:53.013 20:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.013 20:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:53.013 20:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:13:53.013 20:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:13:53.271 20:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:13:53.271 20:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:53.271 20:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:53.271 20:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:13:53.271 20:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:53.271 20:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:53.271 20:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:53.271 20:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.271 20:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:53.271 20:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.271 20:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:53.271 20:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:53.271 20:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:53.872 00:13:53.872 20:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:53.872 20:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:53.872 20:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:54.130 20:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:54.130 20:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:54.130 20:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.130 20:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:54.130 20:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.130 20:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:54.130 { 00:13:54.130 "cntlid": 77, 00:13:54.130 "qid": 0, 00:13:54.130 "state": "enabled", 00:13:54.130 "thread": "nvmf_tgt_poll_group_000", 00:13:54.130 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b", 00:13:54.130 "listen_address": { 00:13:54.130 "trtype": "TCP", 00:13:54.130 "adrfam": "IPv4", 00:13:54.130 "traddr": "10.0.0.3", 00:13:54.130 "trsvcid": "4420" 00:13:54.130 }, 00:13:54.130 "peer_address": { 00:13:54.130 "trtype": "TCP", 00:13:54.130 "adrfam": "IPv4", 00:13:54.130 "traddr": "10.0.0.1", 00:13:54.130 "trsvcid": "36078" 00:13:54.130 }, 00:13:54.130 "auth": { 00:13:54.130 "state": "completed", 00:13:54.130 "digest": "sha384", 00:13:54.130 "dhgroup": "ffdhe4096" 00:13:54.130 } 00:13:54.130 } 00:13:54.130 ]' 00:13:54.130 20:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:54.130 20:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:54.130 20:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- 
# jq -r '.[0].auth.dhgroup' 00:13:54.130 20:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:54.130 20:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:54.388 20:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:54.388 20:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:54.388 20:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:54.645 20:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NWE0MjY1NjEzMWEyYTU3YTczYzZkZWNiOTljM2U1MWFmZTJiMzY3NDI4ODJlMDQ4yd9QJg==: --dhchap-ctrl-secret DHHC-1:01:NmM5YTRkNzg4YjdjZWM4MjcxMmNkMjQxYzg1ODFhN2P3y4Ex: 00:13:54.645 20:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b --hostid 5b7a0101-ee75-44bd-b64f-b6a56d193f2b -l 0 --dhchap-secret DHHC-1:02:NWE0MjY1NjEzMWEyYTU3YTczYzZkZWNiOTljM2U1MWFmZTJiMzY3NDI4ODJlMDQ4yd9QJg==: --dhchap-ctrl-secret DHHC-1:01:NmM5YTRkNzg4YjdjZWM4MjcxMmNkMjQxYzg1ODFhN2P3y4Ex: 00:13:55.576 20:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:55.576 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:55.576 20:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b 00:13:55.576 20:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.576 20:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:55.576 20:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.576 20:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:55.576 20:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:13:55.576 20:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:13:55.576 20:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:13:55.576 20:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:55.576 20:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:55.576 20:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:13:55.576 20:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:55.576 20:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:55.576 20:41:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b --dhchap-key key3 00:13:55.576 20:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.576 20:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:55.834 20:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.834 20:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:55.834 20:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:55.834 20:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:56.091 00:13:56.091 20:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:56.091 20:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:56.091 20:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:56.657 20:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:56.657 20:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:56.657 20:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.657 20:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:56.657 20:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.657 20:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:56.657 { 00:13:56.657 "cntlid": 79, 00:13:56.657 "qid": 0, 00:13:56.657 "state": "enabled", 00:13:56.657 "thread": "nvmf_tgt_poll_group_000", 00:13:56.657 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b", 00:13:56.657 "listen_address": { 00:13:56.657 "trtype": "TCP", 00:13:56.657 "adrfam": "IPv4", 00:13:56.657 "traddr": "10.0.0.3", 00:13:56.657 "trsvcid": "4420" 00:13:56.657 }, 00:13:56.657 "peer_address": { 00:13:56.657 "trtype": "TCP", 00:13:56.657 "adrfam": "IPv4", 00:13:56.657 "traddr": "10.0.0.1", 00:13:56.657 "trsvcid": "36106" 00:13:56.657 }, 00:13:56.657 "auth": { 00:13:56.657 "state": "completed", 00:13:56.657 "digest": "sha384", 00:13:56.657 "dhgroup": "ffdhe4096" 00:13:56.657 } 00:13:56.657 } 00:13:56.657 ]' 00:13:56.657 20:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:56.657 20:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:56.657 20:41:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:56.657 20:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:56.657 20:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:56.914 20:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:56.914 20:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:56.914 20:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:57.170 20:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGM1YTRlNjkyYjkzZTc2Y2YzZGNiNzI0NWY5MmZhNmE3NzdhMTE4NTNmNTMzYmNlNDI2NjBiZWI0NzRmMjUwOfXqmRU=: 00:13:57.170 20:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b --hostid 5b7a0101-ee75-44bd-b64f-b6a56d193f2b -l 0 --dhchap-secret DHHC-1:03:NGM1YTRlNjkyYjkzZTc2Y2YzZGNiNzI0NWY5MmZhNmE3NzdhMTE4NTNmNTMzYmNlNDI2NjBiZWI0NzRmMjUwOfXqmRU=: 00:13:58.100 20:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:58.100 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:58.100 20:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b 00:13:58.100 20:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.100 20:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:58.100 20:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.100 20:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:58.100 20:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:58.100 20:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:13:58.100 20:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:13:58.664 20:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:13:58.664 20:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:58.664 20:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:58.664 20:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:13:58.664 20:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:58.664 20:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:58.664 20:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:58.664 20:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.664 20:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:58.664 20:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.664 20:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:58.664 20:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:58.664 20:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:59.228 00:13:59.228 20:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:59.228 20:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:59.229 20:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:59.486 20:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:59.486 20:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:59.486 20:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.486 20:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:59.486 20:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.486 20:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:59.486 { 00:13:59.486 "cntlid": 81, 00:13:59.486 "qid": 0, 00:13:59.486 "state": "enabled", 00:13:59.486 "thread": "nvmf_tgt_poll_group_000", 00:13:59.486 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b", 00:13:59.486 "listen_address": { 00:13:59.486 "trtype": "TCP", 00:13:59.486 "adrfam": "IPv4", 00:13:59.486 "traddr": "10.0.0.3", 00:13:59.486 "trsvcid": "4420" 00:13:59.486 }, 00:13:59.486 "peer_address": { 00:13:59.486 "trtype": "TCP", 00:13:59.486 "adrfam": "IPv4", 00:13:59.486 "traddr": "10.0.0.1", 00:13:59.486 "trsvcid": "56024" 00:13:59.486 }, 00:13:59.486 "auth": { 00:13:59.486 "state": "completed", 00:13:59.486 "digest": "sha384", 00:13:59.486 "dhgroup": "ffdhe6144" 00:13:59.486 } 00:13:59.486 } 00:13:59.486 ]' 00:13:59.486 20:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 
00:13:59.486 20:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:59.487 20:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:59.487 20:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:59.487 20:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:59.487 20:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:59.487 20:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:59.487 20:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:59.745 20:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MDE3MjVlOGU5ZmYyMGY4NzNlZDQxZmEzODE4NGU5YzRkNzQxMjlhYzQ4Njc3Mzcyww816A==: --dhchap-ctrl-secret DHHC-1:03:MjQ4NGMyNTkzNWMxNmU2OTRmYTkxYjdjOWU2MGY5OWY4OTc2NWQzMzRjMTI0NTZjYmMxZGJlMjRmMDdlNTRkZRKhdRk=: 00:13:59.745 20:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b --hostid 5b7a0101-ee75-44bd-b64f-b6a56d193f2b -l 0 --dhchap-secret DHHC-1:00:MDE3MjVlOGU5ZmYyMGY4NzNlZDQxZmEzODE4NGU5YzRkNzQxMjlhYzQ4Njc3Mzcyww816A==: --dhchap-ctrl-secret DHHC-1:03:MjQ4NGMyNTkzNWMxNmU2OTRmYTkxYjdjOWU2MGY5OWY4OTc2NWQzMzRjMTI0NTZjYmMxZGJlMjRmMDdlNTRkZRKhdRk=: 00:14:00.311 20:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:00.311 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:00.311 20:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b 00:14:00.311 20:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.311 20:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:00.311 20:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.311 20:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:00.311 20:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:14:00.311 20:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:14:00.569 20:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:14:00.569 20:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:00.569 20:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:00.569 20:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
dhgroup=ffdhe6144 00:14:00.569 20:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:00.569 20:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:00.569 20:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:00.569 20:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.569 20:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:00.569 20:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.569 20:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:00.569 20:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:00.569 20:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:01.136 00:14:01.136 20:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:01.136 20:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:01.136 20:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:01.395 20:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:01.395 20:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:01.395 20:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.395 20:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:01.395 20:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.395 20:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:01.395 { 00:14:01.395 "cntlid": 83, 00:14:01.395 "qid": 0, 00:14:01.395 "state": "enabled", 00:14:01.395 "thread": "nvmf_tgt_poll_group_000", 00:14:01.395 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b", 00:14:01.395 "listen_address": { 00:14:01.395 "trtype": "TCP", 00:14:01.395 "adrfam": "IPv4", 00:14:01.395 "traddr": "10.0.0.3", 00:14:01.395 "trsvcid": "4420" 00:14:01.395 }, 00:14:01.395 "peer_address": { 00:14:01.395 "trtype": "TCP", 00:14:01.395 "adrfam": "IPv4", 00:14:01.395 "traddr": "10.0.0.1", 00:14:01.395 "trsvcid": "56042" 00:14:01.395 }, 00:14:01.395 "auth": { 00:14:01.395 "state": "completed", 00:14:01.395 "digest": "sha384", 
00:14:01.395 "dhgroup": "ffdhe6144" 00:14:01.395 } 00:14:01.395 } 00:14:01.395 ]' 00:14:01.395 20:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:01.395 20:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:01.395 20:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:01.395 20:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:01.395 20:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:01.395 20:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:01.395 20:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:01.395 20:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:01.653 20:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTgxODEwNzM3NGU1M2M2NDJmZjkwNGI2N2Y4ZjRkMzEJJzKk: --dhchap-ctrl-secret DHHC-1:02:YjY4YTc0ZTg0YjJmMmM5NDc2MTJjOTk4OTZmMGI5MzMzNWZmYjBmNjZiZTFlZGIx44n4Fg==: 00:14:01.653 20:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b --hostid 5b7a0101-ee75-44bd-b64f-b6a56d193f2b -l 0 --dhchap-secret DHHC-1:01:OTgxODEwNzM3NGU1M2M2NDJmZjkwNGI2N2Y4ZjRkMzEJJzKk: --dhchap-ctrl-secret DHHC-1:02:YjY4YTc0ZTg0YjJmMmM5NDc2MTJjOTk4OTZmMGI5MzMzNWZmYjBmNjZiZTFlZGIx44n4Fg==: 00:14:02.220 20:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:02.220 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:02.220 20:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b 00:14:02.220 20:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.220 20:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:02.478 20:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.478 20:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:02.478 20:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:14:02.478 20:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:14:02.736 20:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:14:02.737 20:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:02.737 20:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
digest=sha384 00:14:02.737 20:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:14:02.737 20:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:02.737 20:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:02.737 20:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:02.737 20:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.737 20:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:02.737 20:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.737 20:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:02.737 20:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:02.737 20:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:03.304 00:14:03.304 20:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:03.304 20:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:03.304 20:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:03.304 20:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:03.304 20:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:03.304 20:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.304 20:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:03.304 20:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.304 20:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:03.304 { 00:14:03.304 "cntlid": 85, 00:14:03.304 "qid": 0, 00:14:03.304 "state": "enabled", 00:14:03.304 "thread": "nvmf_tgt_poll_group_000", 00:14:03.304 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b", 00:14:03.304 "listen_address": { 00:14:03.304 "trtype": "TCP", 00:14:03.304 "adrfam": "IPv4", 00:14:03.304 "traddr": "10.0.0.3", 00:14:03.304 "trsvcid": "4420" 00:14:03.304 }, 00:14:03.304 "peer_address": { 00:14:03.304 "trtype": "TCP", 00:14:03.304 "adrfam": "IPv4", 00:14:03.304 "traddr": "10.0.0.1", 00:14:03.304 "trsvcid": "56068" 
00:14:03.304 }, 00:14:03.304 "auth": { 00:14:03.304 "state": "completed", 00:14:03.304 "digest": "sha384", 00:14:03.304 "dhgroup": "ffdhe6144" 00:14:03.304 } 00:14:03.304 } 00:14:03.304 ]' 00:14:03.304 20:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:03.562 20:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:03.562 20:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:03.562 20:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:03.562 20:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:03.562 20:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:03.562 20:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:03.562 20:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:03.820 20:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NWE0MjY1NjEzMWEyYTU3YTczYzZkZWNiOTljM2U1MWFmZTJiMzY3NDI4ODJlMDQ4yd9QJg==: --dhchap-ctrl-secret DHHC-1:01:NmM5YTRkNzg4YjdjZWM4MjcxMmNkMjQxYzg1ODFhN2P3y4Ex: 00:14:03.820 20:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b --hostid 5b7a0101-ee75-44bd-b64f-b6a56d193f2b -l 0 --dhchap-secret DHHC-1:02:NWE0MjY1NjEzMWEyYTU3YTczYzZkZWNiOTljM2U1MWFmZTJiMzY3NDI4ODJlMDQ4yd9QJg==: --dhchap-ctrl-secret DHHC-1:01:NmM5YTRkNzg4YjdjZWM4MjcxMmNkMjQxYzg1ODFhN2P3y4Ex: 00:14:04.386 20:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:04.644 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:04.644 20:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b 00:14:04.644 20:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.644 20:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:04.644 20:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.644 20:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:04.644 20:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:14:04.644 20:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:14:04.902 20:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:14:04.902 20:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key 
ckey qpairs 00:14:04.902 20:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:04.902 20:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:14:04.902 20:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:04.902 20:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:04.902 20:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b --dhchap-key key3 00:14:04.902 20:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.902 20:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:04.902 20:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.902 20:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:04.902 20:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:04.902 20:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:05.469 00:14:05.469 20:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:05.469 20:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:05.469 20:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:05.726 20:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:05.726 20:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:05.726 20:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.726 20:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:05.726 20:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.726 20:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:05.726 { 00:14:05.726 "cntlid": 87, 00:14:05.726 "qid": 0, 00:14:05.726 "state": "enabled", 00:14:05.726 "thread": "nvmf_tgt_poll_group_000", 00:14:05.726 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b", 00:14:05.726 "listen_address": { 00:14:05.726 "trtype": "TCP", 00:14:05.726 "adrfam": "IPv4", 00:14:05.726 "traddr": "10.0.0.3", 00:14:05.726 "trsvcid": "4420" 00:14:05.726 }, 00:14:05.726 "peer_address": { 00:14:05.726 "trtype": "TCP", 00:14:05.726 "adrfam": "IPv4", 00:14:05.726 "traddr": "10.0.0.1", 00:14:05.726 "trsvcid": 
"56090" 00:14:05.726 }, 00:14:05.726 "auth": { 00:14:05.726 "state": "completed", 00:14:05.726 "digest": "sha384", 00:14:05.726 "dhgroup": "ffdhe6144" 00:14:05.726 } 00:14:05.726 } 00:14:05.726 ]' 00:14:05.726 20:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:05.726 20:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:05.726 20:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:05.726 20:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:05.726 20:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:05.726 20:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:05.726 20:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:05.726 20:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:05.984 20:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGM1YTRlNjkyYjkzZTc2Y2YzZGNiNzI0NWY5MmZhNmE3NzdhMTE4NTNmNTMzYmNlNDI2NjBiZWI0NzRmMjUwOfXqmRU=: 00:14:05.984 20:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b --hostid 5b7a0101-ee75-44bd-b64f-b6a56d193f2b -l 0 --dhchap-secret DHHC-1:03:NGM1YTRlNjkyYjkzZTc2Y2YzZGNiNzI0NWY5MmZhNmE3NzdhMTE4NTNmNTMzYmNlNDI2NjBiZWI0NzRmMjUwOfXqmRU=: 00:14:06.917 20:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:06.917 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:06.917 20:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b 00:14:06.917 20:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.917 20:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:06.917 20:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.917 20:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:06.917 20:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:06.917 20:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:14:06.917 20:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:14:07.176 20:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:14:07.176 20:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest 
dhgroup key ckey qpairs 00:14:07.176 20:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:07.176 20:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:07.176 20:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:07.176 20:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:07.176 20:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:07.176 20:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.176 20:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:07.176 20:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.176 20:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:07.176 20:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:07.176 20:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:07.750 00:14:07.750 20:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:07.750 20:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:07.750 20:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:08.008 20:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:08.008 20:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:08.008 20:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.008 20:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:08.008 20:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.266 20:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:08.266 { 00:14:08.266 "cntlid": 89, 00:14:08.266 "qid": 0, 00:14:08.266 "state": "enabled", 00:14:08.266 "thread": "nvmf_tgt_poll_group_000", 00:14:08.266 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b", 00:14:08.266 "listen_address": { 00:14:08.266 "trtype": "TCP", 00:14:08.266 "adrfam": "IPv4", 00:14:08.266 "traddr": "10.0.0.3", 00:14:08.266 "trsvcid": "4420" 00:14:08.266 }, 00:14:08.266 "peer_address": { 00:14:08.266 
"trtype": "TCP", 00:14:08.266 "adrfam": "IPv4", 00:14:08.266 "traddr": "10.0.0.1", 00:14:08.266 "trsvcid": "52316" 00:14:08.266 }, 00:14:08.266 "auth": { 00:14:08.266 "state": "completed", 00:14:08.266 "digest": "sha384", 00:14:08.266 "dhgroup": "ffdhe8192" 00:14:08.266 } 00:14:08.266 } 00:14:08.266 ]' 00:14:08.266 20:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:08.266 20:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:08.266 20:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:08.266 20:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:08.266 20:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:08.266 20:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:08.266 20:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:08.266 20:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:08.524 20:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MDE3MjVlOGU5ZmYyMGY4NzNlZDQxZmEzODE4NGU5YzRkNzQxMjlhYzQ4Njc3Mzcyww816A==: --dhchap-ctrl-secret DHHC-1:03:MjQ4NGMyNTkzNWMxNmU2OTRmYTkxYjdjOWU2MGY5OWY4OTc2NWQzMzRjMTI0NTZjYmMxZGJlMjRmMDdlNTRkZRKhdRk=: 00:14:08.524 20:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b --hostid 5b7a0101-ee75-44bd-b64f-b6a56d193f2b -l 0 --dhchap-secret DHHC-1:00:MDE3MjVlOGU5ZmYyMGY4NzNlZDQxZmEzODE4NGU5YzRkNzQxMjlhYzQ4Njc3Mzcyww816A==: --dhchap-ctrl-secret DHHC-1:03:MjQ4NGMyNTkzNWMxNmU2OTRmYTkxYjdjOWU2MGY5OWY4OTc2NWQzMzRjMTI0NTZjYmMxZGJlMjRmMDdlNTRkZRKhdRk=: 00:14:09.459 20:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:09.459 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:09.459 20:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b 00:14:09.459 20:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.459 20:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:09.459 20:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.459 20:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:09.459 20:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:14:09.459 20:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:14:09.718 20:42:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:14:09.718 20:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:09.718 20:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:09.718 20:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:09.718 20:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:09.718 20:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:09.718 20:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:09.718 20:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.718 20:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:09.718 20:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.718 20:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:09.718 20:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:09.718 20:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:10.284 00:14:10.284 20:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:10.284 20:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:10.284 20:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:10.543 20:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:10.543 20:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:10.543 20:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.543 20:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:10.543 20:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.543 20:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:10.543 { 00:14:10.543 "cntlid": 91, 00:14:10.543 "qid": 0, 00:14:10.543 "state": "enabled", 00:14:10.543 "thread": "nvmf_tgt_poll_group_000", 00:14:10.543 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b", 
00:14:10.543 "listen_address": { 00:14:10.543 "trtype": "TCP", 00:14:10.543 "adrfam": "IPv4", 00:14:10.543 "traddr": "10.0.0.3", 00:14:10.543 "trsvcid": "4420" 00:14:10.543 }, 00:14:10.543 "peer_address": { 00:14:10.543 "trtype": "TCP", 00:14:10.543 "adrfam": "IPv4", 00:14:10.543 "traddr": "10.0.0.1", 00:14:10.543 "trsvcid": "52340" 00:14:10.543 }, 00:14:10.543 "auth": { 00:14:10.543 "state": "completed", 00:14:10.543 "digest": "sha384", 00:14:10.543 "dhgroup": "ffdhe8192" 00:14:10.543 } 00:14:10.543 } 00:14:10.543 ]' 00:14:10.543 20:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:10.543 20:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:10.543 20:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:10.802 20:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:10.802 20:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:10.802 20:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:10.802 20:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:10.802 20:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:11.064 20:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTgxODEwNzM3NGU1M2M2NDJmZjkwNGI2N2Y4ZjRkMzEJJzKk: --dhchap-ctrl-secret DHHC-1:02:YjY4YTc0ZTg0YjJmMmM5NDc2MTJjOTk4OTZmMGI5MzMzNWZmYjBmNjZiZTFlZGIx44n4Fg==: 00:14:11.064 20:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b --hostid 5b7a0101-ee75-44bd-b64f-b6a56d193f2b -l 0 --dhchap-secret DHHC-1:01:OTgxODEwNzM3NGU1M2M2NDJmZjkwNGI2N2Y4ZjRkMzEJJzKk: --dhchap-ctrl-secret DHHC-1:02:YjY4YTc0ZTg0YjJmMmM5NDc2MTJjOTk4OTZmMGI5MzMzNWZmYjBmNjZiZTFlZGIx44n4Fg==: 00:14:11.632 20:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:11.632 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:11.632 20:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b 00:14:11.632 20:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.632 20:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:11.632 20:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.632 20:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:11.632 20:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:14:11.632 20:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:14:11.891 20:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:14:11.891 20:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:11.891 20:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:11.891 20:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:11.891 20:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:11.891 20:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:11.891 20:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:11.891 20:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.891 20:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:11.891 20:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.891 20:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:11.891 20:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:11.891 20:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:12.457 00:14:12.457 20:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:12.457 20:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:12.457 20:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:12.804 20:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:12.804 20:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:12.804 20:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.804 20:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:12.804 20:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.804 20:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:12.804 { 00:14:12.804 "cntlid": 93, 00:14:12.804 "qid": 0, 00:14:12.804 "state": "enabled", 00:14:12.804 "thread": 
"nvmf_tgt_poll_group_000", 00:14:12.804 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b", 00:14:12.804 "listen_address": { 00:14:12.804 "trtype": "TCP", 00:14:12.804 "adrfam": "IPv4", 00:14:12.804 "traddr": "10.0.0.3", 00:14:12.804 "trsvcid": "4420" 00:14:12.804 }, 00:14:12.804 "peer_address": { 00:14:12.804 "trtype": "TCP", 00:14:12.804 "adrfam": "IPv4", 00:14:12.804 "traddr": "10.0.0.1", 00:14:12.804 "trsvcid": "52372" 00:14:12.804 }, 00:14:12.804 "auth": { 00:14:12.804 "state": "completed", 00:14:12.804 "digest": "sha384", 00:14:12.804 "dhgroup": "ffdhe8192" 00:14:12.804 } 00:14:12.804 } 00:14:12.804 ]' 00:14:12.804 20:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:12.804 20:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:12.804 20:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:12.804 20:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:12.804 20:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:12.804 20:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:12.804 20:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:12.804 20:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:13.062 20:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NWE0MjY1NjEzMWEyYTU3YTczYzZkZWNiOTljM2U1MWFmZTJiMzY3NDI4ODJlMDQ4yd9QJg==: --dhchap-ctrl-secret DHHC-1:01:NmM5YTRkNzg4YjdjZWM4MjcxMmNkMjQxYzg1ODFhN2P3y4Ex: 00:14:13.062 20:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b --hostid 5b7a0101-ee75-44bd-b64f-b6a56d193f2b -l 0 --dhchap-secret DHHC-1:02:NWE0MjY1NjEzMWEyYTU3YTczYzZkZWNiOTljM2U1MWFmZTJiMzY3NDI4ODJlMDQ4yd9QJg==: --dhchap-ctrl-secret DHHC-1:01:NmM5YTRkNzg4YjdjZWM4MjcxMmNkMjQxYzg1ODFhN2P3y4Ex: 00:14:13.636 20:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:13.636 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:13.636 20:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b 00:14:13.636 20:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.636 20:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:13.636 20:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.636 20:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:13.636 20:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:14:13.636 20:42:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:14:13.894 20:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:14:13.894 20:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:13.894 20:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:13.894 20:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:13.894 20:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:13.894 20:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:13.894 20:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b --dhchap-key key3 00:14:13.894 20:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.894 20:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:13.894 20:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.894 20:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:13.894 20:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:13.894 20:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:14.461 00:14:14.461 20:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:14.461 20:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:14.461 20:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:14.718 20:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:14.718 20:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:14.718 20:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.718 20:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:14.718 20:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.718 20:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:14.718 { 00:14:14.718 "cntlid": 95, 00:14:14.718 "qid": 0, 00:14:14.718 "state": "enabled", 00:14:14.718 
"thread": "nvmf_tgt_poll_group_000", 00:14:14.718 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b", 00:14:14.718 "listen_address": { 00:14:14.718 "trtype": "TCP", 00:14:14.718 "adrfam": "IPv4", 00:14:14.718 "traddr": "10.0.0.3", 00:14:14.718 "trsvcid": "4420" 00:14:14.718 }, 00:14:14.718 "peer_address": { 00:14:14.718 "trtype": "TCP", 00:14:14.718 "adrfam": "IPv4", 00:14:14.718 "traddr": "10.0.0.1", 00:14:14.718 "trsvcid": "52406" 00:14:14.718 }, 00:14:14.718 "auth": { 00:14:14.718 "state": "completed", 00:14:14.718 "digest": "sha384", 00:14:14.718 "dhgroup": "ffdhe8192" 00:14:14.718 } 00:14:14.718 } 00:14:14.718 ]' 00:14:14.718 20:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:14.718 20:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:14.718 20:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:14.974 20:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:14.974 20:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:14.974 20:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:14.974 20:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:14.974 20:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:15.231 20:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGM1YTRlNjkyYjkzZTc2Y2YzZGNiNzI0NWY5MmZhNmE3NzdhMTE4NTNmNTMzYmNlNDI2NjBiZWI0NzRmMjUwOfXqmRU=: 00:14:15.231 20:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b --hostid 5b7a0101-ee75-44bd-b64f-b6a56d193f2b -l 0 --dhchap-secret DHHC-1:03:NGM1YTRlNjkyYjkzZTc2Y2YzZGNiNzI0NWY5MmZhNmE3NzdhMTE4NTNmNTMzYmNlNDI2NjBiZWI0NzRmMjUwOfXqmRU=: 00:14:15.801 20:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:15.801 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:15.801 20:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b 00:14:15.801 20:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.801 20:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:16.059 20:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.059 20:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:14:16.059 20:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:16.059 20:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:16.059 20:42:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:16.059 20:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:16.318 20:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:14:16.318 20:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:16.318 20:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:16.318 20:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:16.318 20:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:16.318 20:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:16.318 20:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:16.318 20:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.318 20:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:16.318 20:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.318 20:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:16.318 20:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:16.318 20:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:16.576 00:14:16.576 20:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:16.576 20:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:16.576 20:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:16.834 20:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:16.834 20:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:16.834 20:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.834 20:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:16.834 20:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.834 20:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:16.834 { 00:14:16.834 "cntlid": 97, 00:14:16.834 "qid": 0, 00:14:16.834 "state": "enabled", 00:14:16.834 "thread": "nvmf_tgt_poll_group_000", 00:14:16.834 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b", 00:14:16.834 "listen_address": { 00:14:16.834 "trtype": "TCP", 00:14:16.834 "adrfam": "IPv4", 00:14:16.834 "traddr": "10.0.0.3", 00:14:16.834 "trsvcid": "4420" 00:14:16.834 }, 00:14:16.834 "peer_address": { 00:14:16.834 "trtype": "TCP", 00:14:16.834 "adrfam": "IPv4", 00:14:16.834 "traddr": "10.0.0.1", 00:14:16.834 "trsvcid": "52434" 00:14:16.834 }, 00:14:16.834 "auth": { 00:14:16.834 "state": "completed", 00:14:16.834 "digest": "sha512", 00:14:16.834 "dhgroup": "null" 00:14:16.834 } 00:14:16.834 } 00:14:16.834 ]' 00:14:16.834 20:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:16.834 20:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:16.834 20:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:16.834 20:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:16.834 20:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:17.093 20:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:17.093 20:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:17.093 20:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:17.093 20:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MDE3MjVlOGU5ZmYyMGY4NzNlZDQxZmEzODE4NGU5YzRkNzQxMjlhYzQ4Njc3Mzcyww816A==: --dhchap-ctrl-secret DHHC-1:03:MjQ4NGMyNTkzNWMxNmU2OTRmYTkxYjdjOWU2MGY5OWY4OTc2NWQzMzRjMTI0NTZjYmMxZGJlMjRmMDdlNTRkZRKhdRk=: 00:14:17.093 20:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b --hostid 5b7a0101-ee75-44bd-b64f-b6a56d193f2b -l 0 --dhchap-secret DHHC-1:00:MDE3MjVlOGU5ZmYyMGY4NzNlZDQxZmEzODE4NGU5YzRkNzQxMjlhYzQ4Njc3Mzcyww816A==: --dhchap-ctrl-secret DHHC-1:03:MjQ4NGMyNTkzNWMxNmU2OTRmYTkxYjdjOWU2MGY5OWY4OTc2NWQzMzRjMTI0NTZjYmMxZGJlMjRmMDdlNTRkZRKhdRk=: 00:14:18.025 20:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:18.025 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:18.025 20:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b 00:14:18.025 20:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.025 20:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:18.025 20:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:14:18.025 20:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:18.025 20:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:18.025 20:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:18.025 20:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:14:18.025 20:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:18.025 20:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:18.025 20:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:18.025 20:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:18.025 20:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:18.025 20:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:18.025 20:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.025 20:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:18.025 20:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.025 20:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:18.026 20:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:18.026 20:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:18.589 00:14:18.589 20:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:18.589 20:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:18.589 20:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:18.848 20:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:18.848 20:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:18.848 20:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.848 20:42:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:18.848 20:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.848 20:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:18.848 { 00:14:18.848 "cntlid": 99, 00:14:18.848 "qid": 0, 00:14:18.848 "state": "enabled", 00:14:18.848 "thread": "nvmf_tgt_poll_group_000", 00:14:18.848 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b", 00:14:18.848 "listen_address": { 00:14:18.848 "trtype": "TCP", 00:14:18.848 "adrfam": "IPv4", 00:14:18.848 "traddr": "10.0.0.3", 00:14:18.848 "trsvcid": "4420" 00:14:18.848 }, 00:14:18.848 "peer_address": { 00:14:18.848 "trtype": "TCP", 00:14:18.848 "adrfam": "IPv4", 00:14:18.848 "traddr": "10.0.0.1", 00:14:18.848 "trsvcid": "47802" 00:14:18.848 }, 00:14:18.848 "auth": { 00:14:18.848 "state": "completed", 00:14:18.848 "digest": "sha512", 00:14:18.848 "dhgroup": "null" 00:14:18.848 } 00:14:18.848 } 00:14:18.848 ]' 00:14:18.848 20:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:18.848 20:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:18.848 20:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:18.848 20:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:18.848 20:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:18.848 20:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:18.848 20:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:18.848 20:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:19.105 20:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTgxODEwNzM3NGU1M2M2NDJmZjkwNGI2N2Y4ZjRkMzEJJzKk: --dhchap-ctrl-secret DHHC-1:02:YjY4YTc0ZTg0YjJmMmM5NDc2MTJjOTk4OTZmMGI5MzMzNWZmYjBmNjZiZTFlZGIx44n4Fg==: 00:14:19.105 20:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b --hostid 5b7a0101-ee75-44bd-b64f-b6a56d193f2b -l 0 --dhchap-secret DHHC-1:01:OTgxODEwNzM3NGU1M2M2NDJmZjkwNGI2N2Y4ZjRkMzEJJzKk: --dhchap-ctrl-secret DHHC-1:02:YjY4YTc0ZTg0YjJmMmM5NDc2MTJjOTk4OTZmMGI5MzMzNWZmYjBmNjZiZTFlZGIx44n4Fg==: 00:14:19.669 20:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:19.669 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:19.669 20:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b 00:14:19.669 20:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.669 20:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:19.669 20:42:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.669 20:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:19.669 20:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:19.669 20:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:19.927 20:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:14:19.927 20:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:19.927 20:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:19.927 20:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:19.927 20:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:19.927 20:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:19.927 20:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:19.927 20:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.927 20:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:19.927 20:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.927 20:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:19.927 20:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:19.927 20:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:20.492 00:14:20.492 20:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:20.492 20:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:20.492 20:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:20.751 20:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:20.751 20:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:20.751 20:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.751 20:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:20.751 20:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.751 20:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:20.751 { 00:14:20.751 "cntlid": 101, 00:14:20.751 "qid": 0, 00:14:20.751 "state": "enabled", 00:14:20.751 "thread": "nvmf_tgt_poll_group_000", 00:14:20.751 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b", 00:14:20.751 "listen_address": { 00:14:20.751 "trtype": "TCP", 00:14:20.751 "adrfam": "IPv4", 00:14:20.751 "traddr": "10.0.0.3", 00:14:20.751 "trsvcid": "4420" 00:14:20.751 }, 00:14:20.751 "peer_address": { 00:14:20.751 "trtype": "TCP", 00:14:20.751 "adrfam": "IPv4", 00:14:20.751 "traddr": "10.0.0.1", 00:14:20.751 "trsvcid": "47828" 00:14:20.751 }, 00:14:20.751 "auth": { 00:14:20.751 "state": "completed", 00:14:20.751 "digest": "sha512", 00:14:20.751 "dhgroup": "null" 00:14:20.751 } 00:14:20.751 } 00:14:20.751 ]' 00:14:20.751 20:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:20.751 20:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:20.751 20:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:20.751 20:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:20.751 20:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:20.751 20:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:20.751 20:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:20.751 20:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:21.010 20:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NWE0MjY1NjEzMWEyYTU3YTczYzZkZWNiOTljM2U1MWFmZTJiMzY3NDI4ODJlMDQ4yd9QJg==: --dhchap-ctrl-secret DHHC-1:01:NmM5YTRkNzg4YjdjZWM4MjcxMmNkMjQxYzg1ODFhN2P3y4Ex: 00:14:21.010 20:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b --hostid 5b7a0101-ee75-44bd-b64f-b6a56d193f2b -l 0 --dhchap-secret DHHC-1:02:NWE0MjY1NjEzMWEyYTU3YTczYzZkZWNiOTljM2U1MWFmZTJiMzY3NDI4ODJlMDQ4yd9QJg==: --dhchap-ctrl-secret DHHC-1:01:NmM5YTRkNzg4YjdjZWM4MjcxMmNkMjQxYzg1ODFhN2P3y4Ex: 00:14:21.575 20:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:21.575 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:21.575 20:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b 00:14:21.575 20:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.575 20:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@10 -- # set +x 00:14:21.575 20:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.575 20:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:21.575 20:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:21.575 20:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:21.935 20:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:14:21.936 20:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:21.936 20:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:21.936 20:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:21.936 20:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:21.936 20:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:21.936 20:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b --dhchap-key key3 00:14:21.936 20:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.936 20:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:21.936 20:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.936 20:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:21.936 20:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:21.936 20:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:22.258 00:14:22.258 20:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:22.258 20:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:22.258 20:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:22.518 20:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:22.518 20:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:22.518 20:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:14:22.518 20:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:22.518 20:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.518 20:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:22.518 { 00:14:22.518 "cntlid": 103, 00:14:22.518 "qid": 0, 00:14:22.518 "state": "enabled", 00:14:22.518 "thread": "nvmf_tgt_poll_group_000", 00:14:22.518 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b", 00:14:22.518 "listen_address": { 00:14:22.518 "trtype": "TCP", 00:14:22.518 "adrfam": "IPv4", 00:14:22.518 "traddr": "10.0.0.3", 00:14:22.518 "trsvcid": "4420" 00:14:22.518 }, 00:14:22.518 "peer_address": { 00:14:22.518 "trtype": "TCP", 00:14:22.518 "adrfam": "IPv4", 00:14:22.518 "traddr": "10.0.0.1", 00:14:22.518 "trsvcid": "47850" 00:14:22.518 }, 00:14:22.518 "auth": { 00:14:22.518 "state": "completed", 00:14:22.518 "digest": "sha512", 00:14:22.518 "dhgroup": "null" 00:14:22.518 } 00:14:22.518 } 00:14:22.518 ]' 00:14:22.518 20:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:22.518 20:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:22.518 20:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:22.518 20:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:22.518 20:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:22.518 20:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:22.518 20:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:22.518 20:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:22.776 20:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGM1YTRlNjkyYjkzZTc2Y2YzZGNiNzI0NWY5MmZhNmE3NzdhMTE4NTNmNTMzYmNlNDI2NjBiZWI0NzRmMjUwOfXqmRU=: 00:14:22.776 20:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b --hostid 5b7a0101-ee75-44bd-b64f-b6a56d193f2b -l 0 --dhchap-secret DHHC-1:03:NGM1YTRlNjkyYjkzZTc2Y2YzZGNiNzI0NWY5MmZhNmE3NzdhMTE4NTNmNTMzYmNlNDI2NjBiZWI0NzRmMjUwOfXqmRU=: 00:14:23.340 20:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:23.340 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:23.340 20:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b 00:14:23.340 20:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.340 20:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:23.340 20:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:14:23.340 20:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:23.340 20:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:23.340 20:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:23.340 20:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:23.599 20:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:14:23.599 20:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:23.599 20:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:23.599 20:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:23.599 20:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:23.599 20:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:23.599 20:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:23.599 20:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.599 20:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:23.599 20:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.599 20:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:23.599 20:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:23.599 20:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:24.164 00:14:24.164 20:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:24.164 20:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:24.164 20:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:24.428 20:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:24.428 20:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:24.428 
20:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.428 20:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:24.428 20:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.428 20:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:24.428 { 00:14:24.428 "cntlid": 105, 00:14:24.428 "qid": 0, 00:14:24.428 "state": "enabled", 00:14:24.428 "thread": "nvmf_tgt_poll_group_000", 00:14:24.428 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b", 00:14:24.428 "listen_address": { 00:14:24.428 "trtype": "TCP", 00:14:24.428 "adrfam": "IPv4", 00:14:24.428 "traddr": "10.0.0.3", 00:14:24.428 "trsvcid": "4420" 00:14:24.428 }, 00:14:24.428 "peer_address": { 00:14:24.428 "trtype": "TCP", 00:14:24.428 "adrfam": "IPv4", 00:14:24.428 "traddr": "10.0.0.1", 00:14:24.428 "trsvcid": "47876" 00:14:24.428 }, 00:14:24.428 "auth": { 00:14:24.428 "state": "completed", 00:14:24.428 "digest": "sha512", 00:14:24.428 "dhgroup": "ffdhe2048" 00:14:24.428 } 00:14:24.428 } 00:14:24.428 ]' 00:14:24.429 20:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:24.429 20:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:24.429 20:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:24.429 20:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:24.429 20:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:24.429 20:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:24.429 20:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:24.429 20:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:24.687 20:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MDE3MjVlOGU5ZmYyMGY4NzNlZDQxZmEzODE4NGU5YzRkNzQxMjlhYzQ4Njc3Mzcyww816A==: --dhchap-ctrl-secret DHHC-1:03:MjQ4NGMyNTkzNWMxNmU2OTRmYTkxYjdjOWU2MGY5OWY4OTc2NWQzMzRjMTI0NTZjYmMxZGJlMjRmMDdlNTRkZRKhdRk=: 00:14:24.687 20:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b --hostid 5b7a0101-ee75-44bd-b64f-b6a56d193f2b -l 0 --dhchap-secret DHHC-1:00:MDE3MjVlOGU5ZmYyMGY4NzNlZDQxZmEzODE4NGU5YzRkNzQxMjlhYzQ4Njc3Mzcyww816A==: --dhchap-ctrl-secret DHHC-1:03:MjQ4NGMyNTkzNWMxNmU2OTRmYTkxYjdjOWU2MGY5OWY4OTc2NWQzMzRjMTI0NTZjYmMxZGJlMjRmMDdlNTRkZRKhdRk=: 00:14:25.622 20:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:25.622 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:25.622 20:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b 00:14:25.622 20:42:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.622 20:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:25.622 20:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.622 20:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:25.622 20:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:25.622 20:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:25.880 20:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:14:25.880 20:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:25.880 20:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:25.880 20:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:25.880 20:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:25.880 20:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:25.880 20:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:25.880 20:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.880 20:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:25.880 20:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.880 20:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:25.880 20:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:25.880 20:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:26.138 00:14:26.138 20:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:26.138 20:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:26.138 20:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:26.395 20:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
[[ nvme0 == \n\v\m\e\0 ]] 00:14:26.395 20:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:26.395 20:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.395 20:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:26.395 20:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.396 20:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:26.396 { 00:14:26.396 "cntlid": 107, 00:14:26.396 "qid": 0, 00:14:26.396 "state": "enabled", 00:14:26.396 "thread": "nvmf_tgt_poll_group_000", 00:14:26.396 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b", 00:14:26.396 "listen_address": { 00:14:26.396 "trtype": "TCP", 00:14:26.396 "adrfam": "IPv4", 00:14:26.396 "traddr": "10.0.0.3", 00:14:26.396 "trsvcid": "4420" 00:14:26.396 }, 00:14:26.396 "peer_address": { 00:14:26.396 "trtype": "TCP", 00:14:26.396 "adrfam": "IPv4", 00:14:26.396 "traddr": "10.0.0.1", 00:14:26.396 "trsvcid": "47894" 00:14:26.396 }, 00:14:26.396 "auth": { 00:14:26.396 "state": "completed", 00:14:26.396 "digest": "sha512", 00:14:26.396 "dhgroup": "ffdhe2048" 00:14:26.396 } 00:14:26.396 } 00:14:26.396 ]' 00:14:26.396 20:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:26.396 20:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:26.396 20:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:26.654 20:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:26.654 20:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:26.654 20:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:26.654 20:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:26.654 20:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:26.912 20:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTgxODEwNzM3NGU1M2M2NDJmZjkwNGI2N2Y4ZjRkMzEJJzKk: --dhchap-ctrl-secret DHHC-1:02:YjY4YTc0ZTg0YjJmMmM5NDc2MTJjOTk4OTZmMGI5MzMzNWZmYjBmNjZiZTFlZGIx44n4Fg==: 00:14:26.912 20:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b --hostid 5b7a0101-ee75-44bd-b64f-b6a56d193f2b -l 0 --dhchap-secret DHHC-1:01:OTgxODEwNzM3NGU1M2M2NDJmZjkwNGI2N2Y4ZjRkMzEJJzKk: --dhchap-ctrl-secret DHHC-1:02:YjY4YTc0ZTg0YjJmMmM5NDc2MTJjOTk4OTZmMGI5MzMzNWZmYjBmNjZiZTFlZGIx44n4Fg==: 00:14:27.480 20:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:27.480 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:27.481 20:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b 00:14:27.481 20:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.481 20:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:27.740 20:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.740 20:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:27.740 20:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:27.740 20:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:27.998 20:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:14:27.999 20:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:27.999 20:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:27.999 20:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:27.999 20:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:27.999 20:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:27.999 20:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:27.999 20:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.999 20:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:27.999 20:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.999 20:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:27.999 20:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:27.999 20:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:28.257 00:14:28.257 20:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:28.258 20:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:28.258 20:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:14:28.517 20:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:28.517 20:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:28.517 20:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.517 20:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:28.517 20:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.517 20:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:28.517 { 00:14:28.517 "cntlid": 109, 00:14:28.517 "qid": 0, 00:14:28.517 "state": "enabled", 00:14:28.517 "thread": "nvmf_tgt_poll_group_000", 00:14:28.517 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b", 00:14:28.517 "listen_address": { 00:14:28.517 "trtype": "TCP", 00:14:28.517 "adrfam": "IPv4", 00:14:28.517 "traddr": "10.0.0.3", 00:14:28.517 "trsvcid": "4420" 00:14:28.517 }, 00:14:28.517 "peer_address": { 00:14:28.517 "trtype": "TCP", 00:14:28.517 "adrfam": "IPv4", 00:14:28.517 "traddr": "10.0.0.1", 00:14:28.517 "trsvcid": "56298" 00:14:28.517 }, 00:14:28.517 "auth": { 00:14:28.517 "state": "completed", 00:14:28.517 "digest": "sha512", 00:14:28.517 "dhgroup": "ffdhe2048" 00:14:28.517 } 00:14:28.517 } 00:14:28.517 ]' 00:14:28.517 20:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:28.517 20:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:28.517 20:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:28.776 20:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:28.776 20:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:28.776 20:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:28.776 20:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:28.776 20:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:29.035 20:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NWE0MjY1NjEzMWEyYTU3YTczYzZkZWNiOTljM2U1MWFmZTJiMzY3NDI4ODJlMDQ4yd9QJg==: --dhchap-ctrl-secret DHHC-1:01:NmM5YTRkNzg4YjdjZWM4MjcxMmNkMjQxYzg1ODFhN2P3y4Ex: 00:14:29.035 20:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b --hostid 5b7a0101-ee75-44bd-b64f-b6a56d193f2b -l 0 --dhchap-secret DHHC-1:02:NWE0MjY1NjEzMWEyYTU3YTczYzZkZWNiOTljM2U1MWFmZTJiMzY3NDI4ODJlMDQ4yd9QJg==: --dhchap-ctrl-secret DHHC-1:01:NmM5YTRkNzg4YjdjZWM4MjcxMmNkMjQxYzg1ODFhN2P3y4Ex: 00:14:29.602 20:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:29.602 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:29.602 20:42:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b 00:14:29.602 20:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.602 20:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:29.602 20:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.602 20:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:29.602 20:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:29.602 20:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:29.860 20:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:14:29.860 20:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:29.860 20:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:29.860 20:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:29.860 20:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:29.860 20:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:29.860 20:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b --dhchap-key key3 00:14:29.860 20:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.860 20:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:29.860 20:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.860 20:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:29.860 20:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:29.860 20:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:30.118 00:14:30.377 20:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:30.377 20:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:30.377 20:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:14:30.634 20:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:30.634 20:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:30.634 20:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.634 20:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:30.634 20:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.634 20:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:30.634 { 00:14:30.634 "cntlid": 111, 00:14:30.634 "qid": 0, 00:14:30.634 "state": "enabled", 00:14:30.634 "thread": "nvmf_tgt_poll_group_000", 00:14:30.634 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b", 00:14:30.634 "listen_address": { 00:14:30.634 "trtype": "TCP", 00:14:30.634 "adrfam": "IPv4", 00:14:30.634 "traddr": "10.0.0.3", 00:14:30.634 "trsvcid": "4420" 00:14:30.634 }, 00:14:30.634 "peer_address": { 00:14:30.634 "trtype": "TCP", 00:14:30.635 "adrfam": "IPv4", 00:14:30.635 "traddr": "10.0.0.1", 00:14:30.635 "trsvcid": "56312" 00:14:30.635 }, 00:14:30.635 "auth": { 00:14:30.635 "state": "completed", 00:14:30.635 "digest": "sha512", 00:14:30.635 "dhgroup": "ffdhe2048" 00:14:30.635 } 00:14:30.635 } 00:14:30.635 ]' 00:14:30.635 20:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:30.635 20:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:30.635 20:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:30.635 20:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:30.635 20:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:30.635 20:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:30.635 20:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:30.635 20:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:30.949 20:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGM1YTRlNjkyYjkzZTc2Y2YzZGNiNzI0NWY5MmZhNmE3NzdhMTE4NTNmNTMzYmNlNDI2NjBiZWI0NzRmMjUwOfXqmRU=: 00:14:30.949 20:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b --hostid 5b7a0101-ee75-44bd-b64f-b6a56d193f2b -l 0 --dhchap-secret DHHC-1:03:NGM1YTRlNjkyYjkzZTc2Y2YzZGNiNzI0NWY5MmZhNmE3NzdhMTE4NTNmNTMzYmNlNDI2NjBiZWI0NzRmMjUwOfXqmRU=: 00:14:31.545 20:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:31.545 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:31.545 20:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b 00:14:31.545 20:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.545 20:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:31.545 20:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.545 20:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:31.545 20:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:31.545 20:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:31.545 20:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:31.857 20:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:14:31.857 20:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:31.857 20:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:31.857 20:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:31.857 20:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:31.857 20:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:31.857 20:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:31.857 20:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.857 20:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:31.857 20:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.857 20:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:31.857 20:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:31.857 20:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:32.115 00:14:32.115 20:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:32.115 20:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 
00:14:32.115 20:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:32.681 20:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:32.681 20:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:32.681 20:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.681 20:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:32.681 20:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.681 20:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:32.681 { 00:14:32.681 "cntlid": 113, 00:14:32.681 "qid": 0, 00:14:32.681 "state": "enabled", 00:14:32.681 "thread": "nvmf_tgt_poll_group_000", 00:14:32.681 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b", 00:14:32.681 "listen_address": { 00:14:32.681 "trtype": "TCP", 00:14:32.681 "adrfam": "IPv4", 00:14:32.681 "traddr": "10.0.0.3", 00:14:32.681 "trsvcid": "4420" 00:14:32.681 }, 00:14:32.681 "peer_address": { 00:14:32.681 "trtype": "TCP", 00:14:32.681 "adrfam": "IPv4", 00:14:32.681 "traddr": "10.0.0.1", 00:14:32.681 "trsvcid": "56340" 00:14:32.681 }, 00:14:32.681 "auth": { 00:14:32.681 "state": "completed", 00:14:32.681 "digest": "sha512", 00:14:32.681 "dhgroup": "ffdhe3072" 00:14:32.681 } 00:14:32.681 } 00:14:32.681 ]' 00:14:32.681 20:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:32.681 20:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:32.681 20:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:32.681 20:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:32.681 20:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:32.681 20:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:32.681 20:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:32.681 20:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:32.940 20:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MDE3MjVlOGU5ZmYyMGY4NzNlZDQxZmEzODE4NGU5YzRkNzQxMjlhYzQ4Njc3Mzcyww816A==: --dhchap-ctrl-secret DHHC-1:03:MjQ4NGMyNTkzNWMxNmU2OTRmYTkxYjdjOWU2MGY5OWY4OTc2NWQzMzRjMTI0NTZjYmMxZGJlMjRmMDdlNTRkZRKhdRk=: 00:14:32.940 20:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b --hostid 5b7a0101-ee75-44bd-b64f-b6a56d193f2b -l 0 --dhchap-secret DHHC-1:00:MDE3MjVlOGU5ZmYyMGY4NzNlZDQxZmEzODE4NGU5YzRkNzQxMjlhYzQ4Njc3Mzcyww816A==: --dhchap-ctrl-secret 
DHHC-1:03:MjQ4NGMyNTkzNWMxNmU2OTRmYTkxYjdjOWU2MGY5OWY4OTc2NWQzMzRjMTI0NTZjYmMxZGJlMjRmMDdlNTRkZRKhdRk=: 00:14:33.874 20:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:33.874 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:33.874 20:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b 00:14:33.874 20:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.874 20:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:33.874 20:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.874 20:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:33.874 20:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:33.874 20:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:33.874 20:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:14:33.874 20:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:33.874 20:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:33.874 20:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:33.874 20:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:33.875 20:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:33.875 20:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:33.875 20:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.875 20:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:33.875 20:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.875 20:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:33.875 20:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:33.875 20:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:34.440 00:14:34.440 20:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:34.440 20:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:34.440 20:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:34.699 20:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:34.699 20:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:34.699 20:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.699 20:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:34.699 20:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.699 20:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:34.699 { 00:14:34.699 "cntlid": 115, 00:14:34.699 "qid": 0, 00:14:34.699 "state": "enabled", 00:14:34.699 "thread": "nvmf_tgt_poll_group_000", 00:14:34.699 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b", 00:14:34.699 "listen_address": { 00:14:34.699 "trtype": "TCP", 00:14:34.699 "adrfam": "IPv4", 00:14:34.699 "traddr": "10.0.0.3", 00:14:34.699 "trsvcid": "4420" 00:14:34.699 }, 00:14:34.699 "peer_address": { 00:14:34.699 "trtype": "TCP", 00:14:34.699 "adrfam": "IPv4", 00:14:34.699 "traddr": "10.0.0.1", 00:14:34.699 "trsvcid": "56370" 00:14:34.699 }, 00:14:34.699 "auth": { 00:14:34.699 "state": "completed", 00:14:34.699 "digest": "sha512", 00:14:34.699 "dhgroup": "ffdhe3072" 00:14:34.699 } 00:14:34.699 } 00:14:34.699 ]' 00:14:34.699 20:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:34.699 20:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:34.699 20:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:34.699 20:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:34.699 20:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:34.958 20:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:34.958 20:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:34.958 20:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:35.216 20:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTgxODEwNzM3NGU1M2M2NDJmZjkwNGI2N2Y4ZjRkMzEJJzKk: --dhchap-ctrl-secret DHHC-1:02:YjY4YTc0ZTg0YjJmMmM5NDc2MTJjOTk4OTZmMGI5MzMzNWZmYjBmNjZiZTFlZGIx44n4Fg==: 00:14:35.216 20:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b --hostid 
5b7a0101-ee75-44bd-b64f-b6a56d193f2b -l 0 --dhchap-secret DHHC-1:01:OTgxODEwNzM3NGU1M2M2NDJmZjkwNGI2N2Y4ZjRkMzEJJzKk: --dhchap-ctrl-secret DHHC-1:02:YjY4YTc0ZTg0YjJmMmM5NDc2MTJjOTk4OTZmMGI5MzMzNWZmYjBmNjZiZTFlZGIx44n4Fg==: 00:14:35.782 20:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:36.040 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:36.040 20:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b 00:14:36.040 20:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.040 20:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:36.040 20:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.040 20:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:36.040 20:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:36.040 20:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:36.298 20:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:14:36.298 20:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:36.298 20:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:36.298 20:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:36.298 20:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:36.298 20:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:36.298 20:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:36.298 20:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.298 20:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:36.298 20:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.298 20:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:36.299 20:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:36.299 20:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 
-q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:36.556 00:14:36.556 20:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:36.556 20:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:36.556 20:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:36.815 20:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:36.815 20:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:36.815 20:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.815 20:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:36.815 20:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.815 20:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:36.815 { 00:14:36.815 "cntlid": 117, 00:14:36.815 "qid": 0, 00:14:36.815 "state": "enabled", 00:14:36.815 "thread": "nvmf_tgt_poll_group_000", 00:14:36.815 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b", 00:14:36.815 "listen_address": { 00:14:36.815 "trtype": "TCP", 00:14:36.815 "adrfam": "IPv4", 00:14:36.815 "traddr": "10.0.0.3", 00:14:36.815 "trsvcid": "4420" 00:14:36.815 }, 00:14:36.815 "peer_address": { 00:14:36.815 "trtype": "TCP", 00:14:36.815 "adrfam": "IPv4", 00:14:36.815 "traddr": "10.0.0.1", 00:14:36.815 "trsvcid": "56400" 00:14:36.815 }, 00:14:36.815 "auth": { 00:14:36.815 "state": "completed", 00:14:36.815 "digest": "sha512", 00:14:36.815 "dhgroup": "ffdhe3072" 00:14:36.815 } 00:14:36.815 } 00:14:36.815 ]' 00:14:36.815 20:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:36.815 20:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:36.815 20:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:36.815 20:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:36.815 20:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:37.073 20:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:37.073 20:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:37.073 20:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:37.331 20:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NWE0MjY1NjEzMWEyYTU3YTczYzZkZWNiOTljM2U1MWFmZTJiMzY3NDI4ODJlMDQ4yd9QJg==: --dhchap-ctrl-secret DHHC-1:01:NmM5YTRkNzg4YjdjZWM4MjcxMmNkMjQxYzg1ODFhN2P3y4Ex: 00:14:37.331 20:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b --hostid 5b7a0101-ee75-44bd-b64f-b6a56d193f2b -l 0 --dhchap-secret DHHC-1:02:NWE0MjY1NjEzMWEyYTU3YTczYzZkZWNiOTljM2U1MWFmZTJiMzY3NDI4ODJlMDQ4yd9QJg==: --dhchap-ctrl-secret DHHC-1:01:NmM5YTRkNzg4YjdjZWM4MjcxMmNkMjQxYzg1ODFhN2P3y4Ex: 00:14:37.896 20:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:37.896 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:37.896 20:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b 00:14:37.896 20:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.896 20:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:37.896 20:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.896 20:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:37.896 20:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:37.896 20:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:38.155 20:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:14:38.155 20:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:38.155 20:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:38.155 20:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:38.155 20:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:38.155 20:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:38.155 20:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b --dhchap-key key3 00:14:38.155 20:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.155 20:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:38.155 20:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.155 20:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:38.155 20:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:38.155 20:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:38.413 00:14:38.413 20:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:38.414 20:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:38.414 20:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:38.672 20:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:38.672 20:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:38.673 20:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.673 20:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:38.673 20:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.673 20:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:38.673 { 00:14:38.673 "cntlid": 119, 00:14:38.673 "qid": 0, 00:14:38.673 "state": "enabled", 00:14:38.673 "thread": "nvmf_tgt_poll_group_000", 00:14:38.673 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b", 00:14:38.673 "listen_address": { 00:14:38.673 "trtype": "TCP", 00:14:38.673 "adrfam": "IPv4", 00:14:38.673 "traddr": "10.0.0.3", 00:14:38.673 "trsvcid": "4420" 00:14:38.673 }, 00:14:38.673 "peer_address": { 00:14:38.673 "trtype": "TCP", 00:14:38.673 "adrfam": "IPv4", 00:14:38.673 "traddr": "10.0.0.1", 00:14:38.673 "trsvcid": "49700" 00:14:38.673 }, 00:14:38.673 "auth": { 00:14:38.673 "state": "completed", 00:14:38.673 "digest": "sha512", 00:14:38.673 "dhgroup": "ffdhe3072" 00:14:38.673 } 00:14:38.673 } 00:14:38.673 ]' 00:14:38.673 20:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:38.673 20:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:38.673 20:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:38.673 20:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:38.673 20:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:38.931 20:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:38.931 20:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:38.931 20:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:39.190 20:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGM1YTRlNjkyYjkzZTc2Y2YzZGNiNzI0NWY5MmZhNmE3NzdhMTE4NTNmNTMzYmNlNDI2NjBiZWI0NzRmMjUwOfXqmRU=: 00:14:39.190 20:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 
-q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b --hostid 5b7a0101-ee75-44bd-b64f-b6a56d193f2b -l 0 --dhchap-secret DHHC-1:03:NGM1YTRlNjkyYjkzZTc2Y2YzZGNiNzI0NWY5MmZhNmE3NzdhMTE4NTNmNTMzYmNlNDI2NjBiZWI0NzRmMjUwOfXqmRU=: 00:14:39.756 20:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:39.756 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:39.756 20:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b 00:14:39.756 20:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.756 20:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:39.756 20:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.756 20:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:39.756 20:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:39.756 20:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:14:39.756 20:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:14:40.015 20:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:14:40.015 20:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:40.016 20:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:40.016 20:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:40.016 20:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:40.016 20:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:40.016 20:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:40.016 20:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.016 20:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:40.016 20:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.016 20:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:40.016 20:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:40.016 20:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:40.275 00:14:40.275 20:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:40.275 20:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:40.275 20:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:40.843 20:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:40.843 20:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:40.843 20:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.843 20:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:40.843 20:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.843 20:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:40.843 { 00:14:40.843 "cntlid": 121, 00:14:40.843 "qid": 0, 00:14:40.843 "state": "enabled", 00:14:40.843 "thread": "nvmf_tgt_poll_group_000", 00:14:40.843 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b", 00:14:40.843 "listen_address": { 00:14:40.843 "trtype": "TCP", 00:14:40.843 "adrfam": "IPv4", 00:14:40.843 "traddr": "10.0.0.3", 00:14:40.843 "trsvcid": "4420" 00:14:40.843 }, 00:14:40.843 "peer_address": { 00:14:40.843 "trtype": "TCP", 00:14:40.843 "adrfam": "IPv4", 00:14:40.843 "traddr": "10.0.0.1", 00:14:40.843 "trsvcid": "49732" 00:14:40.843 }, 00:14:40.843 "auth": { 00:14:40.843 "state": "completed", 00:14:40.843 "digest": "sha512", 00:14:40.843 "dhgroup": "ffdhe4096" 00:14:40.843 } 00:14:40.843 } 00:14:40.843 ]' 00:14:40.843 20:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:40.843 20:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:40.843 20:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:40.843 20:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:40.843 20:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:40.843 20:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:40.843 20:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:40.843 20:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:41.102 20:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MDE3MjVlOGU5ZmYyMGY4NzNlZDQxZmEzODE4NGU5YzRkNzQxMjlhYzQ4Njc3Mzcyww816A==: --dhchap-ctrl-secret 
DHHC-1:03:MjQ4NGMyNTkzNWMxNmU2OTRmYTkxYjdjOWU2MGY5OWY4OTc2NWQzMzRjMTI0NTZjYmMxZGJlMjRmMDdlNTRkZRKhdRk=: 00:14:41.102 20:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b --hostid 5b7a0101-ee75-44bd-b64f-b6a56d193f2b -l 0 --dhchap-secret DHHC-1:00:MDE3MjVlOGU5ZmYyMGY4NzNlZDQxZmEzODE4NGU5YzRkNzQxMjlhYzQ4Njc3Mzcyww816A==: --dhchap-ctrl-secret DHHC-1:03:MjQ4NGMyNTkzNWMxNmU2OTRmYTkxYjdjOWU2MGY5OWY4OTc2NWQzMzRjMTI0NTZjYmMxZGJlMjRmMDdlNTRkZRKhdRk=: 00:14:41.669 20:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:41.669 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:41.669 20:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b 00:14:41.669 20:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.928 20:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:41.928 20:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.928 20:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:41.928 20:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:14:41.928 20:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:14:41.928 20:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:14:41.928 20:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:41.928 20:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:41.928 20:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:41.928 20:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:41.928 20:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:41.928 20:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:41.928 20:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.928 20:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:42.186 20:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.186 20:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:42.186 20:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:42.186 20:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:42.444 00:14:42.444 20:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:42.444 20:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:42.444 20:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:42.703 20:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:42.703 20:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:42.703 20:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.703 20:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:42.703 20:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.703 20:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:42.703 { 00:14:42.703 "cntlid": 123, 00:14:42.703 "qid": 0, 00:14:42.703 "state": "enabled", 00:14:42.703 "thread": "nvmf_tgt_poll_group_000", 00:14:42.703 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b", 00:14:42.703 "listen_address": { 00:14:42.703 "trtype": "TCP", 00:14:42.703 "adrfam": "IPv4", 00:14:42.703 "traddr": "10.0.0.3", 00:14:42.703 "trsvcid": "4420" 00:14:42.703 }, 00:14:42.703 "peer_address": { 00:14:42.703 "trtype": "TCP", 00:14:42.703 "adrfam": "IPv4", 00:14:42.703 "traddr": "10.0.0.1", 00:14:42.703 "trsvcid": "49776" 00:14:42.703 }, 00:14:42.703 "auth": { 00:14:42.703 "state": "completed", 00:14:42.703 "digest": "sha512", 00:14:42.703 "dhgroup": "ffdhe4096" 00:14:42.703 } 00:14:42.703 } 00:14:42.703 ]' 00:14:42.703 20:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:42.703 20:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:42.703 20:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:42.976 20:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:42.976 20:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:42.977 20:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:42.977 20:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:42.977 20:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:43.281 20:42:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTgxODEwNzM3NGU1M2M2NDJmZjkwNGI2N2Y4ZjRkMzEJJzKk: --dhchap-ctrl-secret DHHC-1:02:YjY4YTc0ZTg0YjJmMmM5NDc2MTJjOTk4OTZmMGI5MzMzNWZmYjBmNjZiZTFlZGIx44n4Fg==: 00:14:43.281 20:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b --hostid 5b7a0101-ee75-44bd-b64f-b6a56d193f2b -l 0 --dhchap-secret DHHC-1:01:OTgxODEwNzM3NGU1M2M2NDJmZjkwNGI2N2Y4ZjRkMzEJJzKk: --dhchap-ctrl-secret DHHC-1:02:YjY4YTc0ZTg0YjJmMmM5NDc2MTJjOTk4OTZmMGI5MzMzNWZmYjBmNjZiZTFlZGIx44n4Fg==: 00:14:43.847 20:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:43.847 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:43.847 20:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b 00:14:43.847 20:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.847 20:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:43.847 20:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.847 20:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:43.847 20:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:14:43.847 20:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:14:43.847 20:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:14:43.847 20:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:43.847 20:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:43.847 20:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:43.847 20:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:43.847 20:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:43.847 20:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:43.847 20:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.847 20:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:43.847 20:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.847 20:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:43.847 20:42:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:43.847 20:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:44.413 00:14:44.413 20:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:44.413 20:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:44.413 20:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:44.670 20:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:44.670 20:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:44.670 20:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.670 20:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:44.670 20:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.670 20:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:44.670 { 00:14:44.670 "cntlid": 125, 00:14:44.670 "qid": 0, 00:14:44.670 "state": "enabled", 00:14:44.670 "thread": "nvmf_tgt_poll_group_000", 00:14:44.670 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b", 00:14:44.670 "listen_address": { 00:14:44.670 "trtype": "TCP", 00:14:44.670 "adrfam": "IPv4", 00:14:44.670 "traddr": "10.0.0.3", 00:14:44.670 "trsvcid": "4420" 00:14:44.670 }, 00:14:44.670 "peer_address": { 00:14:44.670 "trtype": "TCP", 00:14:44.670 "adrfam": "IPv4", 00:14:44.670 "traddr": "10.0.0.1", 00:14:44.670 "trsvcid": "49800" 00:14:44.670 }, 00:14:44.670 "auth": { 00:14:44.670 "state": "completed", 00:14:44.670 "digest": "sha512", 00:14:44.670 "dhgroup": "ffdhe4096" 00:14:44.670 } 00:14:44.670 } 00:14:44.670 ]' 00:14:44.670 20:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:44.670 20:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:44.670 20:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:44.670 20:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:44.670 20:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:44.929 20:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:44.929 20:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:44.929 20:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:45.187 20:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NWE0MjY1NjEzMWEyYTU3YTczYzZkZWNiOTljM2U1MWFmZTJiMzY3NDI4ODJlMDQ4yd9QJg==: --dhchap-ctrl-secret DHHC-1:01:NmM5YTRkNzg4YjdjZWM4MjcxMmNkMjQxYzg1ODFhN2P3y4Ex: 00:14:45.187 20:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b --hostid 5b7a0101-ee75-44bd-b64f-b6a56d193f2b -l 0 --dhchap-secret DHHC-1:02:NWE0MjY1NjEzMWEyYTU3YTczYzZkZWNiOTljM2U1MWFmZTJiMzY3NDI4ODJlMDQ4yd9QJg==: --dhchap-ctrl-secret DHHC-1:01:NmM5YTRkNzg4YjdjZWM4MjcxMmNkMjQxYzg1ODFhN2P3y4Ex: 00:14:45.754 20:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:45.754 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:45.754 20:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b 00:14:45.754 20:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.754 20:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:45.754 20:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.754 20:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:45.754 20:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:14:45.754 20:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:14:46.013 20:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:14:46.013 20:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:46.013 20:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:46.013 20:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:46.014 20:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:46.014 20:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:46.014 20:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b --dhchap-key key3 00:14:46.014 20:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.014 20:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:46.014 20:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.014 20:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key3 00:14:46.014 20:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:46.014 20:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:46.580 00:14:46.580 20:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:46.580 20:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:46.580 20:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:46.580 20:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:46.580 20:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:46.580 20:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.580 20:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:46.580 20:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.580 20:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:46.580 { 00:14:46.580 "cntlid": 127, 00:14:46.580 "qid": 0, 00:14:46.580 "state": "enabled", 00:14:46.580 "thread": "nvmf_tgt_poll_group_000", 00:14:46.580 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b", 00:14:46.580 "listen_address": { 00:14:46.580 "trtype": "TCP", 00:14:46.580 "adrfam": "IPv4", 00:14:46.580 "traddr": "10.0.0.3", 00:14:46.580 "trsvcid": "4420" 00:14:46.580 }, 00:14:46.580 "peer_address": { 00:14:46.580 "trtype": "TCP", 00:14:46.580 "adrfam": "IPv4", 00:14:46.580 "traddr": "10.0.0.1", 00:14:46.580 "trsvcid": "49824" 00:14:46.580 }, 00:14:46.580 "auth": { 00:14:46.580 "state": "completed", 00:14:46.580 "digest": "sha512", 00:14:46.580 "dhgroup": "ffdhe4096" 00:14:46.580 } 00:14:46.580 } 00:14:46.580 ]' 00:14:46.580 20:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:46.838 20:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:46.838 20:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:46.839 20:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:46.839 20:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:46.839 20:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:46.839 20:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:46.839 20:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:47.096 20:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGM1YTRlNjkyYjkzZTc2Y2YzZGNiNzI0NWY5MmZhNmE3NzdhMTE4NTNmNTMzYmNlNDI2NjBiZWI0NzRmMjUwOfXqmRU=: 00:14:47.096 20:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b --hostid 5b7a0101-ee75-44bd-b64f-b6a56d193f2b -l 0 --dhchap-secret DHHC-1:03:NGM1YTRlNjkyYjkzZTc2Y2YzZGNiNzI0NWY5MmZhNmE3NzdhMTE4NTNmNTMzYmNlNDI2NjBiZWI0NzRmMjUwOfXqmRU=: 00:14:47.661 20:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:47.661 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:47.661 20:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b 00:14:47.661 20:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.661 20:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:47.661 20:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.661 20:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:47.661 20:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:47.661 20:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:14:47.661 20:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:14:47.919 20:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:14:47.919 20:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:47.919 20:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:47.919 20:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:14:47.919 20:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:47.919 20:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:47.919 20:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:47.919 20:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.919 20:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:47.919 20:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.919 20:42:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:47.919 20:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:47.919 20:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:48.484 00:14:48.484 20:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:48.484 20:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:48.484 20:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:48.742 20:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:48.742 20:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:48.742 20:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.742 20:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:48.742 20:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.742 20:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:48.742 { 00:14:48.742 "cntlid": 129, 00:14:48.742 "qid": 0, 00:14:48.742 "state": "enabled", 00:14:48.742 "thread": "nvmf_tgt_poll_group_000", 00:14:48.742 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b", 00:14:48.742 "listen_address": { 00:14:48.742 "trtype": "TCP", 00:14:48.742 "adrfam": "IPv4", 00:14:48.742 "traddr": "10.0.0.3", 00:14:48.742 "trsvcid": "4420" 00:14:48.742 }, 00:14:48.742 "peer_address": { 00:14:48.742 "trtype": "TCP", 00:14:48.742 "adrfam": "IPv4", 00:14:48.742 "traddr": "10.0.0.1", 00:14:48.742 "trsvcid": "60790" 00:14:48.742 }, 00:14:48.742 "auth": { 00:14:48.742 "state": "completed", 00:14:48.742 "digest": "sha512", 00:14:48.742 "dhgroup": "ffdhe6144" 00:14:48.742 } 00:14:48.742 } 00:14:48.742 ]' 00:14:48.742 20:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:48.742 20:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:48.742 20:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:48.742 20:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:48.742 20:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:48.742 20:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:48.742 20:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:48.742 20:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:48.999 20:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MDE3MjVlOGU5ZmYyMGY4NzNlZDQxZmEzODE4NGU5YzRkNzQxMjlhYzQ4Njc3Mzcyww816A==: --dhchap-ctrl-secret DHHC-1:03:MjQ4NGMyNTkzNWMxNmU2OTRmYTkxYjdjOWU2MGY5OWY4OTc2NWQzMzRjMTI0NTZjYmMxZGJlMjRmMDdlNTRkZRKhdRk=: 00:14:48.999 20:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b --hostid 5b7a0101-ee75-44bd-b64f-b6a56d193f2b -l 0 --dhchap-secret DHHC-1:00:MDE3MjVlOGU5ZmYyMGY4NzNlZDQxZmEzODE4NGU5YzRkNzQxMjlhYzQ4Njc3Mzcyww816A==: --dhchap-ctrl-secret DHHC-1:03:MjQ4NGMyNTkzNWMxNmU2OTRmYTkxYjdjOWU2MGY5OWY4OTc2NWQzMzRjMTI0NTZjYmMxZGJlMjRmMDdlNTRkZRKhdRk=: 00:14:49.564 20:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:49.564 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:49.564 20:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b 00:14:49.564 20:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.564 20:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:49.564 20:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.564 20:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:49.564 20:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:14:49.564 20:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:14:50.131 20:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:14:50.131 20:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:50.131 20:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:50.131 20:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:14:50.132 20:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:50.132 20:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:50.132 20:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:50.132 20:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.132 20:42:44 
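[editor's annotation] The nvme-cli check above exercises the same subsystem from the kernel initiator, passing the secrets in their textual DHHC-1 form instead of SPDK key names: --dhchap-secret carries the host secret and --dhchap-ctrl-secret the controller secret for bidirectional authentication. A sketch of that connect/disconnect cycle (the two DHHC-1 strings are placeholders here; the real values appear verbatim in the trace):
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b
    HOST_SECRET='DHHC-1:...'    # host secret from the trace
    CTRL_SECRET='DHHC-1:...'    # controller secret from the trace

    # Connect with one I/O queue and authenticate in both directions.
    nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
         -q "$HOSTNQN" --hostid 5b7a0101-ee75-44bd-b64f-b6a56d193f2b -l 0 \
         --dhchap-secret "$HOST_SECRET" --dhchap-ctrl-secret "$CTRL_SECRET"

    # Tear the association back down once the connect has been verified.
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0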
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:50.132 20:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.132 20:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:50.132 20:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:50.132 20:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:50.390 00:14:50.390 20:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:50.390 20:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:50.390 20:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:50.647 20:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:50.647 20:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:50.647 20:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.647 20:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:50.647 20:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.647 20:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:50.647 { 00:14:50.647 "cntlid": 131, 00:14:50.647 "qid": 0, 00:14:50.647 "state": "enabled", 00:14:50.647 "thread": "nvmf_tgt_poll_group_000", 00:14:50.647 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b", 00:14:50.647 "listen_address": { 00:14:50.647 "trtype": "TCP", 00:14:50.647 "adrfam": "IPv4", 00:14:50.647 "traddr": "10.0.0.3", 00:14:50.647 "trsvcid": "4420" 00:14:50.647 }, 00:14:50.647 "peer_address": { 00:14:50.647 "trtype": "TCP", 00:14:50.647 "adrfam": "IPv4", 00:14:50.647 "traddr": "10.0.0.1", 00:14:50.647 "trsvcid": "60828" 00:14:50.647 }, 00:14:50.647 "auth": { 00:14:50.647 "state": "completed", 00:14:50.647 "digest": "sha512", 00:14:50.647 "dhgroup": "ffdhe6144" 00:14:50.647 } 00:14:50.647 } 00:14:50.647 ]' 00:14:50.647 20:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:50.905 20:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:50.905 20:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:50.905 20:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:50.905 20:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq 
-r '.[0].auth.state' 00:14:50.905 20:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:50.905 20:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:50.905 20:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:51.163 20:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTgxODEwNzM3NGU1M2M2NDJmZjkwNGI2N2Y4ZjRkMzEJJzKk: --dhchap-ctrl-secret DHHC-1:02:YjY4YTc0ZTg0YjJmMmM5NDc2MTJjOTk4OTZmMGI5MzMzNWZmYjBmNjZiZTFlZGIx44n4Fg==: 00:14:51.163 20:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b --hostid 5b7a0101-ee75-44bd-b64f-b6a56d193f2b -l 0 --dhchap-secret DHHC-1:01:OTgxODEwNzM3NGU1M2M2NDJmZjkwNGI2N2Y4ZjRkMzEJJzKk: --dhchap-ctrl-secret DHHC-1:02:YjY4YTc0ZTg0YjJmMmM5NDc2MTJjOTk4OTZmMGI5MzMzNWZmYjBmNjZiZTFlZGIx44n4Fg==: 00:14:51.729 20:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:51.729 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:51.729 20:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b 00:14:51.729 20:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.729 20:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:51.729 20:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.729 20:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:51.729 20:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:14:51.730 20:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:14:52.296 20:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:14:52.296 20:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:52.296 20:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:52.296 20:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:14:52.296 20:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:52.296 20:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:52.296 20:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:52.296 20:42:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.296 20:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:52.296 20:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.296 20:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:52.296 20:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:52.296 20:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:52.554 00:14:52.554 20:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:52.554 20:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:52.554 20:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:52.812 20:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:52.812 20:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:52.812 20:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.812 20:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:52.812 20:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.812 20:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:52.812 { 00:14:52.812 "cntlid": 133, 00:14:52.812 "qid": 0, 00:14:52.812 "state": "enabled", 00:14:52.812 "thread": "nvmf_tgt_poll_group_000", 00:14:52.812 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b", 00:14:52.812 "listen_address": { 00:14:52.812 "trtype": "TCP", 00:14:52.812 "adrfam": "IPv4", 00:14:52.812 "traddr": "10.0.0.3", 00:14:52.812 "trsvcid": "4420" 00:14:52.812 }, 00:14:52.812 "peer_address": { 00:14:52.812 "trtype": "TCP", 00:14:52.812 "adrfam": "IPv4", 00:14:52.812 "traddr": "10.0.0.1", 00:14:52.812 "trsvcid": "60848" 00:14:52.812 }, 00:14:52.812 "auth": { 00:14:52.812 "state": "completed", 00:14:52.812 "digest": "sha512", 00:14:52.812 "dhgroup": "ffdhe6144" 00:14:52.812 } 00:14:52.812 } 00:14:52.812 ]' 00:14:52.812 20:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:52.812 20:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:52.812 20:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:52.812 20:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 
== \f\f\d\h\e\6\1\4\4 ]] 00:14:52.812 20:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:53.070 20:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:53.070 20:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:53.070 20:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:53.329 20:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NWE0MjY1NjEzMWEyYTU3YTczYzZkZWNiOTljM2U1MWFmZTJiMzY3NDI4ODJlMDQ4yd9QJg==: --dhchap-ctrl-secret DHHC-1:01:NmM5YTRkNzg4YjdjZWM4MjcxMmNkMjQxYzg1ODFhN2P3y4Ex: 00:14:53.329 20:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b --hostid 5b7a0101-ee75-44bd-b64f-b6a56d193f2b -l 0 --dhchap-secret DHHC-1:02:NWE0MjY1NjEzMWEyYTU3YTczYzZkZWNiOTljM2U1MWFmZTJiMzY3NDI4ODJlMDQ4yd9QJg==: --dhchap-ctrl-secret DHHC-1:01:NmM5YTRkNzg4YjdjZWM4MjcxMmNkMjQxYzg1ODFhN2P3y4Ex: 00:14:53.971 20:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:53.971 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:53.971 20:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b 00:14:53.971 20:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.971 20:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:53.971 20:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.971 20:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:53.971 20:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:14:53.971 20:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:14:54.231 20:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:14:54.231 20:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:54.231 20:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:54.231 20:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:14:54.231 20:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:54.231 20:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:54.231 20:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b --dhchap-key key3 00:14:54.231 20:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.231 20:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:54.231 20:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.231 20:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:54.231 20:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:54.231 20:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:54.797 00:14:54.797 20:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:54.797 20:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:54.797 20:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:55.056 20:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:55.056 20:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:55.056 20:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.056 20:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:55.056 20:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.056 20:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:55.056 { 00:14:55.056 "cntlid": 135, 00:14:55.056 "qid": 0, 00:14:55.056 "state": "enabled", 00:14:55.056 "thread": "nvmf_tgt_poll_group_000", 00:14:55.056 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b", 00:14:55.056 "listen_address": { 00:14:55.056 "trtype": "TCP", 00:14:55.056 "adrfam": "IPv4", 00:14:55.056 "traddr": "10.0.0.3", 00:14:55.056 "trsvcid": "4420" 00:14:55.056 }, 00:14:55.056 "peer_address": { 00:14:55.056 "trtype": "TCP", 00:14:55.056 "adrfam": "IPv4", 00:14:55.056 "traddr": "10.0.0.1", 00:14:55.056 "trsvcid": "60876" 00:14:55.056 }, 00:14:55.056 "auth": { 00:14:55.056 "state": "completed", 00:14:55.056 "digest": "sha512", 00:14:55.056 "dhgroup": "ffdhe6144" 00:14:55.056 } 00:14:55.056 } 00:14:55.056 ]' 00:14:55.056 20:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:55.056 20:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:55.056 20:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:55.056 20:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:55.056 20:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:55.056 20:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:55.056 20:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:55.056 20:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:55.315 20:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGM1YTRlNjkyYjkzZTc2Y2YzZGNiNzI0NWY5MmZhNmE3NzdhMTE4NTNmNTMzYmNlNDI2NjBiZWI0NzRmMjUwOfXqmRU=: 00:14:55.315 20:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b --hostid 5b7a0101-ee75-44bd-b64f-b6a56d193f2b -l 0 --dhchap-secret DHHC-1:03:NGM1YTRlNjkyYjkzZTc2Y2YzZGNiNzI0NWY5MmZhNmE3NzdhMTE4NTNmNTMzYmNlNDI2NjBiZWI0NzRmMjUwOfXqmRU=: 00:14:56.250 20:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:56.250 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:56.250 20:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b 00:14:56.250 20:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.250 20:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:56.250 20:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.250 20:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:56.250 20:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:56.250 20:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:14:56.250 20:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:14:56.250 20:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:14:56.250 20:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:56.250 20:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:56.250 20:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:56.250 20:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:56.250 20:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:56.250 20:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:56.250 20:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.250 20:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:56.250 20:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.250 20:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:56.250 20:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:56.250 20:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:56.817 00:14:56.817 20:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:56.817 20:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:56.817 20:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:57.384 20:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:57.384 20:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:57.384 20:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.384 20:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:57.384 20:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.384 20:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:57.384 { 00:14:57.384 "cntlid": 137, 00:14:57.384 "qid": 0, 00:14:57.384 "state": "enabled", 00:14:57.384 "thread": "nvmf_tgt_poll_group_000", 00:14:57.384 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b", 00:14:57.384 "listen_address": { 00:14:57.384 "trtype": "TCP", 00:14:57.384 "adrfam": "IPv4", 00:14:57.384 "traddr": "10.0.0.3", 00:14:57.384 "trsvcid": "4420" 00:14:57.384 }, 00:14:57.384 "peer_address": { 00:14:57.384 "trtype": "TCP", 00:14:57.384 "adrfam": "IPv4", 00:14:57.384 "traddr": "10.0.0.1", 00:14:57.384 "trsvcid": "60908" 00:14:57.384 }, 00:14:57.384 "auth": { 00:14:57.384 "state": "completed", 00:14:57.384 "digest": "sha512", 00:14:57.384 "dhgroup": "ffdhe8192" 00:14:57.384 } 00:14:57.384 } 00:14:57.384 ]' 00:14:57.384 20:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:57.384 20:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:57.384 20:42:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:57.384 20:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:57.384 20:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:57.384 20:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:57.384 20:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:57.385 20:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:57.643 20:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MDE3MjVlOGU5ZmYyMGY4NzNlZDQxZmEzODE4NGU5YzRkNzQxMjlhYzQ4Njc3Mzcyww816A==: --dhchap-ctrl-secret DHHC-1:03:MjQ4NGMyNTkzNWMxNmU2OTRmYTkxYjdjOWU2MGY5OWY4OTc2NWQzMzRjMTI0NTZjYmMxZGJlMjRmMDdlNTRkZRKhdRk=: 00:14:57.644 20:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b --hostid 5b7a0101-ee75-44bd-b64f-b6a56d193f2b -l 0 --dhchap-secret DHHC-1:00:MDE3MjVlOGU5ZmYyMGY4NzNlZDQxZmEzODE4NGU5YzRkNzQxMjlhYzQ4Njc3Mzcyww816A==: --dhchap-ctrl-secret DHHC-1:03:MjQ4NGMyNTkzNWMxNmU2OTRmYTkxYjdjOWU2MGY5OWY4OTc2NWQzMzRjMTI0NTZjYmMxZGJlMjRmMDdlNTRkZRKhdRk=: 00:14:58.581 20:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:58.581 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:58.581 20:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b 00:14:58.581 20:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.581 20:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:58.581 20:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.581 20:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:58.581 20:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:14:58.581 20:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:14:58.581 20:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:14:58.581 20:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:58.581 20:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:58.581 20:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:58.581 20:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:58.581 20:42:53 
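[editor's annotation] Between iterations the trace returns to a clean state: the host-side controller is detached, the kernel initiator used for the nvme-cli check is disconnected, and the host entry is removed from the subsystem before the next digest/dhgroup combination is configured. As a sketch (same paths and NQNs as in this run):
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b

    $RPC -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0          # host-side bdev controller
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0                         # kernel initiator, if connected
    $RPC nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN" # target-side host entry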
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:58.581 20:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:58.581 20:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.581 20:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:58.581 20:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.581 20:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:58.581 20:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:58.581 20:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:59.165 00:14:59.165 20:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:59.165 20:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:59.165 20:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:59.733 20:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:59.733 20:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:59.733 20:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.733 20:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:59.733 20:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.733 20:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:59.733 { 00:14:59.733 "cntlid": 139, 00:14:59.733 "qid": 0, 00:14:59.733 "state": "enabled", 00:14:59.733 "thread": "nvmf_tgt_poll_group_000", 00:14:59.733 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b", 00:14:59.733 "listen_address": { 00:14:59.733 "trtype": "TCP", 00:14:59.733 "adrfam": "IPv4", 00:14:59.733 "traddr": "10.0.0.3", 00:14:59.733 "trsvcid": "4420" 00:14:59.733 }, 00:14:59.733 "peer_address": { 00:14:59.733 "trtype": "TCP", 00:14:59.733 "adrfam": "IPv4", 00:14:59.733 "traddr": "10.0.0.1", 00:14:59.733 "trsvcid": "38916" 00:14:59.733 }, 00:14:59.733 "auth": { 00:14:59.733 "state": "completed", 00:14:59.733 "digest": "sha512", 00:14:59.733 "dhgroup": "ffdhe8192" 00:14:59.733 } 00:14:59.733 } 00:14:59.733 ]' 00:14:59.733 20:42:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:59.733 20:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:59.733 20:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:59.733 20:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:59.733 20:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:59.733 20:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:59.733 20:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:59.733 20:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:59.992 20:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTgxODEwNzM3NGU1M2M2NDJmZjkwNGI2N2Y4ZjRkMzEJJzKk: --dhchap-ctrl-secret DHHC-1:02:YjY4YTc0ZTg0YjJmMmM5NDc2MTJjOTk4OTZmMGI5MzMzNWZmYjBmNjZiZTFlZGIx44n4Fg==: 00:14:59.992 20:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b --hostid 5b7a0101-ee75-44bd-b64f-b6a56d193f2b -l 0 --dhchap-secret DHHC-1:01:OTgxODEwNzM3NGU1M2M2NDJmZjkwNGI2N2Y4ZjRkMzEJJzKk: --dhchap-ctrl-secret DHHC-1:02:YjY4YTc0ZTg0YjJmMmM5NDc2MTJjOTk4OTZmMGI5MzMzNWZmYjBmNjZiZTFlZGIx44n4Fg==: 00:15:00.928 20:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:00.928 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:00.928 20:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b 00:15:00.928 20:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.928 20:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:00.928 20:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.928 20:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:00.928 20:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:00.928 20:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:00.928 20:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:15:00.928 20:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:00.928 20:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:00.928 20:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
dhgroup=ffdhe8192 00:15:00.928 20:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:00.928 20:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:00.928 20:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:00.928 20:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.928 20:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:00.928 20:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.928 20:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:00.928 20:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:00.928 20:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:01.495 00:15:01.495 20:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:01.495 20:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:01.495 20:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:01.754 20:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:01.754 20:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:01.754 20:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.754 20:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:01.754 20:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.754 20:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:01.754 { 00:15:01.754 "cntlid": 141, 00:15:01.754 "qid": 0, 00:15:01.754 "state": "enabled", 00:15:01.754 "thread": "nvmf_tgt_poll_group_000", 00:15:01.754 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b", 00:15:01.754 "listen_address": { 00:15:01.754 "trtype": "TCP", 00:15:01.754 "adrfam": "IPv4", 00:15:01.754 "traddr": "10.0.0.3", 00:15:01.754 "trsvcid": "4420" 00:15:01.754 }, 00:15:01.754 "peer_address": { 00:15:01.754 "trtype": "TCP", 00:15:01.754 "adrfam": "IPv4", 00:15:01.754 "traddr": "10.0.0.1", 00:15:01.754 "trsvcid": "38940" 00:15:01.754 }, 00:15:01.754 "auth": { 00:15:01.754 "state": "completed", 00:15:01.754 "digest": 
"sha512", 00:15:01.754 "dhgroup": "ffdhe8192" 00:15:01.754 } 00:15:01.754 } 00:15:01.754 ]' 00:15:01.754 20:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:02.014 20:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:02.014 20:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:02.014 20:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:02.014 20:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:02.014 20:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:02.014 20:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:02.014 20:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:02.273 20:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NWE0MjY1NjEzMWEyYTU3YTczYzZkZWNiOTljM2U1MWFmZTJiMzY3NDI4ODJlMDQ4yd9QJg==: --dhchap-ctrl-secret DHHC-1:01:NmM5YTRkNzg4YjdjZWM4MjcxMmNkMjQxYzg1ODFhN2P3y4Ex: 00:15:02.274 20:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b --hostid 5b7a0101-ee75-44bd-b64f-b6a56d193f2b -l 0 --dhchap-secret DHHC-1:02:NWE0MjY1NjEzMWEyYTU3YTczYzZkZWNiOTljM2U1MWFmZTJiMzY3NDI4ODJlMDQ4yd9QJg==: --dhchap-ctrl-secret DHHC-1:01:NmM5YTRkNzg4YjdjZWM4MjcxMmNkMjQxYzg1ODFhN2P3y4Ex: 00:15:02.840 20:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:02.840 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:02.840 20:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b 00:15:02.840 20:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.840 20:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:02.840 20:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.840 20:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:02.840 20:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:02.840 20:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:03.098 20:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:15:03.098 20:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:03.098 20:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # digest=sha512 00:15:03.098 20:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:03.098 20:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:03.098 20:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:03.098 20:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b --dhchap-key key3 00:15:03.098 20:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.098 20:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:03.098 20:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.098 20:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:03.098 20:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:03.098 20:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:03.666 00:15:03.666 20:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:03.666 20:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:03.666 20:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:03.925 20:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:03.925 20:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:03.925 20:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.925 20:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:03.925 20:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.925 20:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:03.925 { 00:15:03.925 "cntlid": 143, 00:15:03.925 "qid": 0, 00:15:03.925 "state": "enabled", 00:15:03.925 "thread": "nvmf_tgt_poll_group_000", 00:15:03.925 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b", 00:15:03.925 "listen_address": { 00:15:03.925 "trtype": "TCP", 00:15:03.925 "adrfam": "IPv4", 00:15:03.925 "traddr": "10.0.0.3", 00:15:03.925 "trsvcid": "4420" 00:15:03.925 }, 00:15:03.925 "peer_address": { 00:15:03.925 "trtype": "TCP", 00:15:03.925 "adrfam": "IPv4", 00:15:03.925 "traddr": "10.0.0.1", 00:15:03.925 "trsvcid": "38976" 00:15:03.925 }, 00:15:03.925 "auth": { 00:15:03.925 "state": "completed", 00:15:03.925 
"digest": "sha512", 00:15:03.925 "dhgroup": "ffdhe8192" 00:15:03.925 } 00:15:03.925 } 00:15:03.925 ]' 00:15:03.925 20:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:03.925 20:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:03.925 20:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:04.184 20:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:04.184 20:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:04.184 20:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:04.184 20:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:04.184 20:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:04.441 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGM1YTRlNjkyYjkzZTc2Y2YzZGNiNzI0NWY5MmZhNmE3NzdhMTE4NTNmNTMzYmNlNDI2NjBiZWI0NzRmMjUwOfXqmRU=: 00:15:04.441 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b --hostid 5b7a0101-ee75-44bd-b64f-b6a56d193f2b -l 0 --dhchap-secret DHHC-1:03:NGM1YTRlNjkyYjkzZTc2Y2YzZGNiNzI0NWY5MmZhNmE3NzdhMTE4NTNmNTMzYmNlNDI2NjBiZWI0NzRmMjUwOfXqmRU=: 00:15:05.006 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:05.006 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:05.006 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b 00:15:05.006 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.006 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:05.006 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.006 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:15:05.006 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:15:05.006 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:15:05.006 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:15:05.006 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:15:05.006 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups 
null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:15:05.264 20:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:15:05.265 20:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:05.265 20:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:05.265 20:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:05.265 20:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:05.265 20:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:05.265 20:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:05.265 20:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.265 20:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:05.265 20:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.265 20:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:05.265 20:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:05.265 20:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:05.832 00:15:05.832 20:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:05.832 20:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:05.832 20:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:06.091 20:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:06.091 20:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:06.091 20:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.091 20:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:06.091 20:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.091 20:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:06.091 { 00:15:06.091 "cntlid": 145, 00:15:06.091 "qid": 0, 00:15:06.091 "state": "enabled", 00:15:06.091 "thread": "nvmf_tgt_poll_group_000", 00:15:06.091 
"hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b", 00:15:06.091 "listen_address": { 00:15:06.091 "trtype": "TCP", 00:15:06.091 "adrfam": "IPv4", 00:15:06.091 "traddr": "10.0.0.3", 00:15:06.091 "trsvcid": "4420" 00:15:06.091 }, 00:15:06.091 "peer_address": { 00:15:06.091 "trtype": "TCP", 00:15:06.091 "adrfam": "IPv4", 00:15:06.091 "traddr": "10.0.0.1", 00:15:06.091 "trsvcid": "39014" 00:15:06.091 }, 00:15:06.091 "auth": { 00:15:06.091 "state": "completed", 00:15:06.091 "digest": "sha512", 00:15:06.091 "dhgroup": "ffdhe8192" 00:15:06.091 } 00:15:06.091 } 00:15:06.091 ]' 00:15:06.091 20:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:06.091 20:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:06.091 20:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:06.091 20:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:06.091 20:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:06.350 20:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:06.350 20:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:06.350 20:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:06.610 20:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MDE3MjVlOGU5ZmYyMGY4NzNlZDQxZmEzODE4NGU5YzRkNzQxMjlhYzQ4Njc3Mzcyww816A==: --dhchap-ctrl-secret DHHC-1:03:MjQ4NGMyNTkzNWMxNmU2OTRmYTkxYjdjOWU2MGY5OWY4OTc2NWQzMzRjMTI0NTZjYmMxZGJlMjRmMDdlNTRkZRKhdRk=: 00:15:06.610 20:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b --hostid 5b7a0101-ee75-44bd-b64f-b6a56d193f2b -l 0 --dhchap-secret DHHC-1:00:MDE3MjVlOGU5ZmYyMGY4NzNlZDQxZmEzODE4NGU5YzRkNzQxMjlhYzQ4Njc3Mzcyww816A==: --dhchap-ctrl-secret DHHC-1:03:MjQ4NGMyNTkzNWMxNmU2OTRmYTkxYjdjOWU2MGY5OWY4OTc2NWQzMzRjMTI0NTZjYmMxZGJlMjRmMDdlNTRkZRKhdRk=: 00:15:07.176 20:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:07.176 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:07.176 20:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b 00:15:07.176 20:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.176 20:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:07.176 20:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.176 20:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b --dhchap-key key1 00:15:07.176 20:43:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.176 20:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:07.176 20:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.176 20:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:15:07.176 20:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:15:07.176 20:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:15:07.176 20:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:15:07.176 20:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:07.176 20:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:15:07.176 20:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:07.176 20:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:15:07.176 20:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:15:07.177 20:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:15:07.743 request: 00:15:07.743 { 00:15:07.743 "name": "nvme0", 00:15:07.743 "trtype": "tcp", 00:15:07.743 "traddr": "10.0.0.3", 00:15:07.743 "adrfam": "ipv4", 00:15:07.743 "trsvcid": "4420", 00:15:07.743 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:15:07.743 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b", 00:15:07.743 "prchk_reftag": false, 00:15:07.743 "prchk_guard": false, 00:15:07.743 "hdgst": false, 00:15:07.743 "ddgst": false, 00:15:07.743 "dhchap_key": "key2", 00:15:07.743 "allow_unrecognized_csi": false, 00:15:07.743 "method": "bdev_nvme_attach_controller", 00:15:07.743 "req_id": 1 00:15:07.743 } 00:15:07.743 Got JSON-RPC error response 00:15:07.743 response: 00:15:07.743 { 00:15:07.743 "code": -5, 00:15:07.743 "message": "Input/output error" 00:15:07.743 } 00:15:07.743 20:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:15:07.743 20:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:07.743 20:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:07.743 20:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:07.743 20:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b 00:15:07.743 
20:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.743 20:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:07.743 20:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.743 20:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:07.743 20:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.743 20:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:07.743 20:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.743 20:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:15:07.744 20:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:15:07.744 20:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:15:07.744 20:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:15:07.744 20:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:07.744 20:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:15:07.744 20:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:07.744 20:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:15:07.744 20:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:15:07.744 20:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:15:08.310 request: 00:15:08.310 { 00:15:08.310 "name": "nvme0", 00:15:08.310 "trtype": "tcp", 00:15:08.310 "traddr": "10.0.0.3", 00:15:08.310 "adrfam": "ipv4", 00:15:08.310 "trsvcid": "4420", 00:15:08.310 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:15:08.310 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b", 00:15:08.310 "prchk_reftag": false, 00:15:08.310 "prchk_guard": false, 00:15:08.310 "hdgst": false, 00:15:08.310 "ddgst": false, 00:15:08.310 "dhchap_key": "key1", 00:15:08.310 "dhchap_ctrlr_key": "ckey2", 00:15:08.310 "allow_unrecognized_csi": false, 00:15:08.310 "method": "bdev_nvme_attach_controller", 00:15:08.310 "req_id": 1 00:15:08.310 } 00:15:08.310 Got JSON-RPC error response 00:15:08.310 response: 00:15:08.310 { 
00:15:08.310 "code": -5, 00:15:08.310 "message": "Input/output error" 00:15:08.310 } 00:15:08.310 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:15:08.310 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:08.310 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:08.310 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:08.310 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b 00:15:08.310 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.310 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:08.311 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.311 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b --dhchap-key key1 00:15:08.311 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.311 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:08.311 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.311 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:08.311 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:15:08.311 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:08.311 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:15:08.311 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:08.311 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:15:08.311 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:08.311 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:08.311 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:08.311 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:08.878 
request: 00:15:08.878 { 00:15:08.878 "name": "nvme0", 00:15:08.878 "trtype": "tcp", 00:15:08.878 "traddr": "10.0.0.3", 00:15:08.878 "adrfam": "ipv4", 00:15:08.878 "trsvcid": "4420", 00:15:08.878 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:15:08.878 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b", 00:15:08.878 "prchk_reftag": false, 00:15:08.878 "prchk_guard": false, 00:15:08.878 "hdgst": false, 00:15:08.878 "ddgst": false, 00:15:08.878 "dhchap_key": "key1", 00:15:08.878 "dhchap_ctrlr_key": "ckey1", 00:15:08.878 "allow_unrecognized_csi": false, 00:15:08.878 "method": "bdev_nvme_attach_controller", 00:15:08.878 "req_id": 1 00:15:08.878 } 00:15:08.878 Got JSON-RPC error response 00:15:08.878 response: 00:15:08.878 { 00:15:08.878 "code": -5, 00:15:08.878 "message": "Input/output error" 00:15:08.878 } 00:15:08.878 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:15:08.878 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:08.878 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:08.878 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:08.878 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b 00:15:08.878 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.878 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:08.878 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.878 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 67638 00:15:08.878 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 67638 ']' 00:15:08.878 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 67638 00:15:08.879 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:15:08.879 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:08.879 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67638 00:15:08.879 killing process with pid 67638 00:15:08.879 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:08.879 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:08.879 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67638' 00:15:08.879 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 67638 00:15:08.879 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 67638 00:15:09.137 20:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:15:09.137 20:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:09.137 20:43:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:09.137 20:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:09.137 20:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=70758 00:15:09.137 20:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 70758 00:15:09.137 20:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:15:09.137 20:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 70758 ']' 00:15:09.137 20:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:09.137 20:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:09.137 20:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:09.137 20:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:09.137 20:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:10.514 20:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:10.514 20:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:15:10.514 20:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:10.514 20:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:10.514 20:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:10.514 20:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:10.514 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:10.514 20:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:15:10.514 20:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 70758 00:15:10.514 20:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 70758 ']' 00:15:10.514 20:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:10.514 20:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:10.514 20:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
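For reference, the target restart traced above reduces to a short command sequence. The sketch below is hand-written from the commands visible in this log (the nvmf_tgt binary path, the nvmf_tgt_ns_spdk namespace, the -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth flags, and the /tmp/spdk.key-* file names all appear in the trace); the final framework_start_init call is an assumption about how the paused framework is released and is not shown in this excerpt.

# start the target paused, with nvmf_auth debug logging enabled (as in the trace above)
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &

# register the DHCHAP key files with the keyring while the framework is still paused
/home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/spdk.key-null.Akr
/home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Bho

# release the paused framework (assumed step, not visible in this excerpt)
/home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init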
00:15:10.514 20:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:10.514 20:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:10.514 20:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:10.514 20:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:15:10.514 20:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:15:10.514 20:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.514 20:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:10.773 null0 00:15:10.773 20:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.773 20:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:15:10.773 20:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Akr 00:15:10.773 20:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.773 20:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:10.773 20:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.773 20:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.Bho ]] 00:15:10.773 20:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Bho 00:15:10.773 20:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.773 20:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:10.773 20:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.773 20:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:15:10.773 20:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.xDg 00:15:10.773 20:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.773 20:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:10.773 20:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.773 20:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.zYK ]] 00:15:10.773 20:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.zYK 00:15:10.773 20:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.773 20:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:10.773 20:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.773 20:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:15:10.773 20:43:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.IsR 00:15:10.773 20:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.773 20:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:10.773 20:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.773 20:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.Zgq ]] 00:15:10.773 20:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Zgq 00:15:10.773 20:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.773 20:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:10.774 20:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.774 20:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:15:10.774 20:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.0Zy 00:15:10.774 20:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.774 20:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:10.774 20:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.774 20:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:15:10.774 20:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:15:10.774 20:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:10.774 20:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:10.774 20:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:10.774 20:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:10.774 20:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:10.774 20:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b --dhchap-key key3 00:15:10.774 20:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.774 20:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:10.774 20:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.774 20:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:10.774 20:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
00:15:10.774 20:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:11.720 nvme0n1 00:15:11.720 20:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:11.720 20:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:11.720 20:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:12.008 20:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:12.008 20:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:12.008 20:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.008 20:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:12.008 20:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.008 20:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:12.008 { 00:15:12.008 "cntlid": 1, 00:15:12.008 "qid": 0, 00:15:12.008 "state": "enabled", 00:15:12.008 "thread": "nvmf_tgt_poll_group_000", 00:15:12.008 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b", 00:15:12.008 "listen_address": { 00:15:12.008 "trtype": "TCP", 00:15:12.008 "adrfam": "IPv4", 00:15:12.008 "traddr": "10.0.0.3", 00:15:12.008 "trsvcid": "4420" 00:15:12.008 }, 00:15:12.008 "peer_address": { 00:15:12.008 "trtype": "TCP", 00:15:12.008 "adrfam": "IPv4", 00:15:12.008 "traddr": "10.0.0.1", 00:15:12.008 "trsvcid": "45590" 00:15:12.008 }, 00:15:12.008 "auth": { 00:15:12.008 "state": "completed", 00:15:12.008 "digest": "sha512", 00:15:12.008 "dhgroup": "ffdhe8192" 00:15:12.008 } 00:15:12.008 } 00:15:12.008 ]' 00:15:12.008 20:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:12.008 20:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:12.008 20:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:12.008 20:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:12.008 20:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:12.290 20:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:12.290 20:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:12.290 20:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:12.548 20:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:NGM1YTRlNjkyYjkzZTc2Y2YzZGNiNzI0NWY5MmZhNmE3NzdhMTE4NTNmNTMzYmNlNDI2NjBiZWI0NzRmMjUwOfXqmRU=: 00:15:12.548 20:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b --hostid 5b7a0101-ee75-44bd-b64f-b6a56d193f2b -l 0 --dhchap-secret DHHC-1:03:NGM1YTRlNjkyYjkzZTc2Y2YzZGNiNzI0NWY5MmZhNmE3NzdhMTE4NTNmNTMzYmNlNDI2NjBiZWI0NzRmMjUwOfXqmRU=: 00:15:13.112 20:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:13.112 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:13.112 20:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b 00:15:13.112 20:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.112 20:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:13.112 20:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.112 20:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b --dhchap-key key3 00:15:13.112 20:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.112 20:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:13.112 20:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.112 20:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:15:13.112 20:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:15:13.369 20:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:15:13.369 20:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:15:13.369 20:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:15:13.369 20:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:15:13.369 20:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:13.369 20:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:15:13.369 20:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:13.369 20:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:13.369 20:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:13.369 20:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:13.626 request: 00:15:13.626 { 00:15:13.626 "name": "nvme0", 00:15:13.626 "trtype": "tcp", 00:15:13.626 "traddr": "10.0.0.3", 00:15:13.626 "adrfam": "ipv4", 00:15:13.626 "trsvcid": "4420", 00:15:13.626 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:15:13.626 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b", 00:15:13.626 "prchk_reftag": false, 00:15:13.626 "prchk_guard": false, 00:15:13.626 "hdgst": false, 00:15:13.626 "ddgst": false, 00:15:13.626 "dhchap_key": "key3", 00:15:13.626 "allow_unrecognized_csi": false, 00:15:13.626 "method": "bdev_nvme_attach_controller", 00:15:13.626 "req_id": 1 00:15:13.626 } 00:15:13.626 Got JSON-RPC error response 00:15:13.626 response: 00:15:13.626 { 00:15:13.626 "code": -5, 00:15:13.626 "message": "Input/output error" 00:15:13.626 } 00:15:13.626 20:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:15:13.626 20:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:13.626 20:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:13.626 20:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:13.626 20:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:15:13.626 20:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:15:13.626 20:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:15:13.626 20:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:15:13.884 20:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:15:13.884 20:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:15:13.884 20:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:15:13.884 20:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:15:13.884 20:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:13.884 20:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:15:13.884 20:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:13.884 20:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:13.884 20:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:13.884 20:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:14.142 request: 00:15:14.142 { 00:15:14.142 "name": "nvme0", 00:15:14.142 "trtype": "tcp", 00:15:14.142 "traddr": "10.0.0.3", 00:15:14.142 "adrfam": "ipv4", 00:15:14.142 "trsvcid": "4420", 00:15:14.142 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:15:14.142 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b", 00:15:14.142 "prchk_reftag": false, 00:15:14.142 "prchk_guard": false, 00:15:14.142 "hdgst": false, 00:15:14.142 "ddgst": false, 00:15:14.142 "dhchap_key": "key3", 00:15:14.142 "allow_unrecognized_csi": false, 00:15:14.142 "method": "bdev_nvme_attach_controller", 00:15:14.142 "req_id": 1 00:15:14.142 } 00:15:14.142 Got JSON-RPC error response 00:15:14.142 response: 00:15:14.142 { 00:15:14.142 "code": -5, 00:15:14.142 "message": "Input/output error" 00:15:14.142 } 00:15:14.400 20:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:15:14.400 20:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:14.400 20:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:14.400 20:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:14.400 20:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:15:14.400 20:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:15:14.400 20:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:15:14.400 20:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:15:14.400 20:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:15:14.400 20:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:15:14.400 20:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b 00:15:14.400 20:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.400 20:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:14.400 20:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.400 20:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b 00:15:14.400 20:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.400 20:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:14.400 20:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.400 20:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:15:14.400 20:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:15:14.400 20:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:15:14.400 20:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:15:14.400 20:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:14.400 20:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:15:14.400 20:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:14.400 20:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:15:14.400 20:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:15:14.400 20:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:15:14.966 request: 00:15:14.966 { 00:15:14.966 "name": "nvme0", 00:15:14.966 "trtype": "tcp", 00:15:14.966 "traddr": "10.0.0.3", 00:15:14.966 "adrfam": "ipv4", 00:15:14.966 "trsvcid": "4420", 00:15:14.966 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:15:14.966 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b", 00:15:14.966 "prchk_reftag": false, 00:15:14.966 "prchk_guard": false, 00:15:14.966 "hdgst": false, 00:15:14.966 "ddgst": false, 00:15:14.966 "dhchap_key": "key0", 00:15:14.966 "dhchap_ctrlr_key": "key1", 00:15:14.966 "allow_unrecognized_csi": false, 00:15:14.966 "method": "bdev_nvme_attach_controller", 00:15:14.966 "req_id": 1 00:15:14.966 } 00:15:14.966 Got JSON-RPC error response 00:15:14.966 response: 00:15:14.966 { 00:15:14.966 "code": -5, 00:15:14.966 "message": "Input/output error" 00:15:14.966 } 00:15:14.966 20:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:15:14.966 20:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:14.966 20:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:14.966 20:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( 
!es == 0 )) 00:15:14.966 20:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:15:14.966 20:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:15:14.966 20:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:15:15.224 nvme0n1 00:15:15.224 20:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:15:15.224 20:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:15:15.224 20:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:15.482 20:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:15.482 20:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:15.482 20:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:15.739 20:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b --dhchap-key key1 00:15:15.739 20:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.739 20:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:15.739 20:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.739 20:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:15:15.739 20:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:15:15.739 20:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:15:16.674 nvme0n1 00:15:16.674 20:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:15:16.674 20:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:15:16.674 20:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:16.933 20:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:16.933 20:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b --dhchap-key key2 --dhchap-ctrlr-key key3 00:15:16.933 20:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.933 20:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:16.933 20:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.933 20:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:15:16.933 20:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:16.933 20:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:15:17.191 20:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:17.191 20:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:NWE0MjY1NjEzMWEyYTU3YTczYzZkZWNiOTljM2U1MWFmZTJiMzY3NDI4ODJlMDQ4yd9QJg==: --dhchap-ctrl-secret DHHC-1:03:NGM1YTRlNjkyYjkzZTc2Y2YzZGNiNzI0NWY5MmZhNmE3NzdhMTE4NTNmNTMzYmNlNDI2NjBiZWI0NzRmMjUwOfXqmRU=: 00:15:17.191 20:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b --hostid 5b7a0101-ee75-44bd-b64f-b6a56d193f2b -l 0 --dhchap-secret DHHC-1:02:NWE0MjY1NjEzMWEyYTU3YTczYzZkZWNiOTljM2U1MWFmZTJiMzY3NDI4ODJlMDQ4yd9QJg==: --dhchap-ctrl-secret DHHC-1:03:NGM1YTRlNjkyYjkzZTc2Y2YzZGNiNzI0NWY5MmZhNmE3NzdhMTE4NTNmNTMzYmNlNDI2NjBiZWI0NzRmMjUwOfXqmRU=: 00:15:17.758 20:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:15:17.758 20:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:15:17.758 20:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:15:17.758 20:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:15:17.758 20:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:15:17.758 20:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:15:17.758 20:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:15:17.758 20:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:17.758 20:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:18.054 20:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:15:18.054 20:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:15:18.054 20:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:15:18.054 20:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:15:18.054 20:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:18.054 20:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:15:18.054 20:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:18.054 20:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:15:18.054 20:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:15:18.054 20:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:15:18.624 request: 00:15:18.624 { 00:15:18.624 "name": "nvme0", 00:15:18.624 "trtype": "tcp", 00:15:18.624 "traddr": "10.0.0.3", 00:15:18.624 "adrfam": "ipv4", 00:15:18.624 "trsvcid": "4420", 00:15:18.624 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:15:18.624 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b", 00:15:18.624 "prchk_reftag": false, 00:15:18.624 "prchk_guard": false, 00:15:18.624 "hdgst": false, 00:15:18.624 "ddgst": false, 00:15:18.624 "dhchap_key": "key1", 00:15:18.624 "allow_unrecognized_csi": false, 00:15:18.624 "method": "bdev_nvme_attach_controller", 00:15:18.624 "req_id": 1 00:15:18.624 } 00:15:18.624 Got JSON-RPC error response 00:15:18.624 response: 00:15:18.624 { 00:15:18.624 "code": -5, 00:15:18.624 "message": "Input/output error" 00:15:18.624 } 00:15:18.624 20:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:15:18.624 20:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:18.624 20:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:18.624 20:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:18.624 20:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:15:18.624 20:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:15:18.624 20:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:15:19.559 nvme0n1 00:15:19.559 
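The remainder of the trace exercises live DHCHAP re-keying. Condensed from the commands that follow, the flow is roughly the sketch below; the nvmf_subsystem_set_keys call, the secret value, and the lsblk check are taken from the trace, while the sysfs attribute name on the host side is an assumption (the xtrace output does not show the redirect target of the echo).

# target side: rotate the DHCHAP keys associated with the host entry (visible in the trace)
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b \
    --dhchap-key key1 --dhchap-ctrlr-key key3

# host side: hand the matching secret to the kernel controller so the next
# re-authentication uses it (attribute name assumed: dhchap_secret)
echo 'DHHC-1:01:OTgxODEwNzM3NGU1M2M2NDJmZjkwNGI2N2Y4ZjRkMzEJJzKk:' \
    > /sys/devices/virtual/nvme-fabrics/ctl/nvme0/dhchap_secret

# give the controller a couple of seconds to re-authenticate, then confirm the namespace is still up
sleep 2s
lsblk -l -o NAME | grep -q -w nvme0n1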
20:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:15:19.559 20:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:19.559 20:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:15:19.559 20:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:19.559 20:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:19.559 20:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:19.816 20:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b 00:15:19.816 20:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.816 20:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:19.816 20:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.816 20:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:15:19.816 20:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:15:19.816 20:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:15:20.074 nvme0n1 00:15:20.074 20:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:15:20.074 20:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:15:20.074 20:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:20.332 20:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:20.332 20:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:20.332 20:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:20.590 20:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b --dhchap-key key1 --dhchap-ctrlr-key key3 00:15:20.590 20:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.590 20:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:20.590 20:43:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.590 20:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:OTgxODEwNzM3NGU1M2M2NDJmZjkwNGI2N2Y4ZjRkMzEJJzKk: '' 2s 00:15:20.590 20:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:15:20.590 20:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:15:20.590 20:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:OTgxODEwNzM3NGU1M2M2NDJmZjkwNGI2N2Y4ZjRkMzEJJzKk: 00:15:20.590 20:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:15:20.590 20:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:15:20.590 20:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:15:20.590 20:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:OTgxODEwNzM3NGU1M2M2NDJmZjkwNGI2N2Y4ZjRkMzEJJzKk: ]] 00:15:20.590 20:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:OTgxODEwNzM3NGU1M2M2NDJmZjkwNGI2N2Y4ZjRkMzEJJzKk: 00:15:20.590 20:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:15:20.590 20:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:15:20.590 20:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:15:23.134 20:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:15:23.134 20:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:15:23.134 20:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:15:23.134 20:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:15:23.134 20:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:15:23.134 20:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:15:23.134 20:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:15:23.134 20:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b --dhchap-key key1 --dhchap-ctrlr-key key2 00:15:23.134 20:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.134 20:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:23.134 20:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.134 20:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:NWE0MjY1NjEzMWEyYTU3YTczYzZkZWNiOTljM2U1MWFmZTJiMzY3NDI4ODJlMDQ4yd9QJg==: 2s 00:15:23.134 20:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:15:23.134 20:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:15:23.134 20:43:17 
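The rotation here is done against a live, kernel-managed controller rather than through the host RPC server: after the target installs the new key pair with nvmf_subsystem_set_keys, the nvme_set_keys helper writes the matching DHHC-1 secret through the controller's node under /sys/devices/virtual/nvme-fabrics/ctl/ and gives the connection two seconds to re-authenticate, and waitforblk then polls lsblk until nvme0n1 is still present. The trace shows the ctl path, the echoed secret and the 2s sleep but not the redirect target, so the sysfs attribute names below are an assumption (dhchap_secret for the host key, dhchap_ctrl_secret for the controller key, as exposed by recent kernels), and the poll loop is a simplified form of the waitforblk helper:

# host-key rotation on the in-kernel controller nvme0 (attribute names assumed, see note above)
dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0
echo 'DHHC-1:01:OTgxODEwNzM3NGU1M2M2NDJmZjkwNGI2N2Y4ZjRkMzEJJzKk:' > "$dev/dhchap_secret"
# a controller-key rotation would instead write the DHHC-1:02:... secret to "$dev/dhchap_ctrl_secret"
sleep 2s                                     # settle time used by the test
# roughly what waitforblk does: wait until the namespace block device is visible
until lsblk -l -o NAME | grep -q -w nvme0n1; do sleep 0.1; done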
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:15:23.134 20:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:NWE0MjY1NjEzMWEyYTU3YTczYzZkZWNiOTljM2U1MWFmZTJiMzY3NDI4ODJlMDQ4yd9QJg==: 00:15:23.134 20:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:15:23.134 20:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:15:23.134 20:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:15:23.134 20:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:NWE0MjY1NjEzMWEyYTU3YTczYzZkZWNiOTljM2U1MWFmZTJiMzY3NDI4ODJlMDQ4yd9QJg==: ]] 00:15:23.134 20:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:NWE0MjY1NjEzMWEyYTU3YTczYzZkZWNiOTljM2U1MWFmZTJiMzY3NDI4ODJlMDQ4yd9QJg==: 00:15:23.134 20:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:15:23.134 20:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:15:25.036 20:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:15:25.037 20:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:15:25.037 20:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:15:25.037 20:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:15:25.037 20:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:15:25.037 20:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:15:25.037 20:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:15:25.037 20:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:25.037 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:25.037 20:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b --dhchap-key key0 --dhchap-ctrlr-key key1 00:15:25.037 20:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.037 20:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:25.037 20:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.037 20:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:15:25.037 20:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:15:25.037 20:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:15:25.972 nvme0n1 00:15:25.972 20:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b --dhchap-key key2 --dhchap-ctrlr-key key3 00:15:25.972 20:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.972 20:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:25.972 20:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.972 20:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:15:25.972 20:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:15:26.230 20:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:15:26.230 20:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:15:26.230 20:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:26.797 20:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:26.797 20:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b 00:15:26.797 20:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.797 20:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:26.797 20:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.797 20:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:15:26.797 20:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:15:26.797 20:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:15:26.797 20:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:26.797 20:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:15:27.363 20:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:27.363 20:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b --dhchap-key key2 --dhchap-ctrlr-key key3 00:15:27.363 20:43:22 
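When the controller is managed by the SPDK host RPC server instead of the kernel, the same rotation is driven entirely over JSON-RPC, which is what the auth.sh@252/@253 steps above do: the target is told to accept the new pair first, then bdev_nvme_set_keys re-authenticates the existing nvme0 controller in place. Both commands are taken verbatim from the trace (rpc_cmd is the test wrapper that talks to the target daemon's RPC socket); the mismatched pair tried right after this is the one the target rejects with -13 (Permission denied):

# target side: allow key2 (host) / key3 (controller) for this host on cnode0
rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b \
    --dhchap-key key2 --dhchap-ctrlr-key key3
# host side: rotate the already-attached controller to the same pair
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock \
    bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3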
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.363 20:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:27.363 20:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.363 20:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:15:27.363 20:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:15:27.364 20:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:15:27.364 20:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:15:27.364 20:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:27.364 20:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:15:27.364 20:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:27.364 20:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:15:27.364 20:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:15:27.622 request: 00:15:27.622 { 00:15:27.622 "name": "nvme0", 00:15:27.622 "dhchap_key": "key1", 00:15:27.622 "dhchap_ctrlr_key": "key3", 00:15:27.622 "method": "bdev_nvme_set_keys", 00:15:27.622 "req_id": 1 00:15:27.622 } 00:15:27.622 Got JSON-RPC error response 00:15:27.622 response: 00:15:27.622 { 00:15:27.622 "code": -13, 00:15:27.622 "message": "Permission denied" 00:15:27.622 } 00:15:27.881 20:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:15:27.881 20:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:27.881 20:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:27.881 20:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:27.881 20:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:15:27.881 20:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:27.881 20:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:15:28.139 20:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:15:28.139 20:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:15:29.074 20:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:15:29.074 20:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:15:29.074 20:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:29.332 20:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:15:29.332 20:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b --dhchap-key key0 --dhchap-ctrlr-key key1 00:15:29.332 20:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.332 20:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:29.332 20:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.332 20:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:15:29.332 20:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:15:29.332 20:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:15:30.310 nvme0n1 00:15:30.310 20:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b --dhchap-key key2 --dhchap-ctrlr-key key3 00:15:30.310 20:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.311 20:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:30.311 20:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.311 20:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:15:30.311 20:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:15:30.311 20:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:15:30.311 20:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:15:30.311 20:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:30.311 20:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:15:30.311 20:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:30.311 20:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys 
nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:15:30.311 20:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:15:30.878 request: 00:15:30.878 { 00:15:30.878 "name": "nvme0", 00:15:30.878 "dhchap_key": "key2", 00:15:30.878 "dhchap_ctrlr_key": "key0", 00:15:30.878 "method": "bdev_nvme_set_keys", 00:15:30.878 "req_id": 1 00:15:30.878 } 00:15:30.878 Got JSON-RPC error response 00:15:30.878 response: 00:15:30.878 { 00:15:30.878 "code": -13, 00:15:30.878 "message": "Permission denied" 00:15:30.878 } 00:15:30.878 20:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:15:30.878 20:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:30.878 20:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:30.878 20:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:30.878 20:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:15:30.878 20:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:30.878 20:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:15:31.136 20:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:15:31.136 20:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:15:32.072 20:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:15:32.072 20:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:32.072 20:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:15:32.331 20:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:15:32.331 20:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:15:32.331 20:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:15:32.331 20:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 67668 00:15:32.331 20:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 67668 ']' 00:15:32.331 20:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 67668 00:15:32.331 20:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:15:32.590 20:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:32.590 20:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67668 00:15:32.590 20:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:15:32.590 20:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:15:32.590 killing process with pid 67668 00:15:32.590 20:43:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67668' 00:15:32.590 20:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 67668 00:15:32.590 20:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 67668 00:15:32.848 20:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:15:32.848 20:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:32.848 20:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:15:32.848 20:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:32.848 20:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:15:32.848 20:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:32.848 20:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:32.849 rmmod nvme_tcp 00:15:32.849 rmmod nvme_fabrics 00:15:32.849 rmmod nvme_keyring 00:15:32.849 20:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:32.849 20:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:15:32.849 20:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:15:32.849 20:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 70758 ']' 00:15:32.849 20:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 70758 00:15:32.849 20:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 70758 ']' 00:15:32.849 20:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 70758 00:15:33.107 20:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:15:33.107 20:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:33.107 20:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70758 00:15:33.107 20:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:33.107 20:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:33.107 20:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70758' 00:15:33.107 killing process with pid 70758 00:15:33.107 20:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 70758 00:15:33.107 20:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 70758 00:15:33.365 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:33.366 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:33.366 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:33.366 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:15:33.366 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 
00:15:33.366 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:33.366 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:15:33.366 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:33.366 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:33.366 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:33.366 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:33.366 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:33.366 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:33.366 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:33.366 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:33.366 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:33.366 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:33.366 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:33.366 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:33.366 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:33.366 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:33.366 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:33.366 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:33.366 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:33.366 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:33.366 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:33.624 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@300 -- # return 0 00:15:33.624 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.Akr /tmp/spdk.key-sha256.xDg /tmp/spdk.key-sha384.IsR /tmp/spdk.key-sha512.0Zy /tmp/spdk.key-sha512.Bho /tmp/spdk.key-sha384.zYK /tmp/spdk.key-sha256.Zgq '' /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log /home/vagrant/spdk_repo/spdk/../output/nvmf-auth.log 00:15:33.624 00:15:33.624 real 3m15.559s 00:15:33.624 user 7m35.566s 00:15:33.624 sys 0m42.191s 00:15:33.624 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:33.624 ************************************ 00:15:33.624 END TEST nvmf_auth_target 00:15:33.624 ************************************ 00:15:33.624 20:43:28 
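The block above is the standard teardown for this suite: killprocess stops the host RPC daemon (pid 67668) and the nvmf target (pid 70758) by pid, nvmftestfini unloads the initiator modules, strips every firewall rule the test added, dismantles the veth/bridge/namespace topology, and the per-test DHHC key files are removed. The iptables step is worth noting because rules are never tracked individually; each one carries an SPDK_NVMF comment and the whole set is dropped in one pass. A condensed form of what the trace shows (the final ip netns delete is an assumption, since the remove_spdk_ns helper body is not expanded in this log):

# remove every rule tagged by the test in one pass
iptables-save | grep -v SPDK_NVMF | iptables-restore
# unload the initiator modules
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics
# tear down the virtual topology
ip link delete nvmf_br type bridge
ip link delete nvmf_init_if
ip link delete nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
ip netns delete nvmf_tgt_ns_spdk   # assumed; performed by the remove_spdk_ns helper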
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:33.624 20:43:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:15:33.624 20:43:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:15:33.624 20:43:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:15:33.624 20:43:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:33.625 20:43:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:33.625 ************************************ 00:15:33.625 START TEST nvmf_bdevio_no_huge 00:15:33.625 ************************************ 00:15:33.625 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:15:33.625 * Looking for test storage... 00:15:33.625 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:33.625 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:33.625 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lcov --version 00:15:33.625 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:33.625 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:33.625 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:33.625 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:33.625 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:33.625 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:15:33.625 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:15:33.625 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:15:33.625 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:15:33.625 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:15:33.625 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:15:33.625 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:15:33.625 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:33.625 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:15:33.625 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:15:33.625 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:33.625 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:33.625 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:15:33.625 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:15:33.625 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:33.625 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:15:33.885 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:15:33.885 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:15:33.885 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:15:33.885 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:33.885 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:15:33.885 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:15:33.885 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:33.885 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:33.885 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:15:33.885 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:33.885 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:33.885 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:33.885 --rc genhtml_branch_coverage=1 00:15:33.885 --rc genhtml_function_coverage=1 00:15:33.885 --rc genhtml_legend=1 00:15:33.885 --rc geninfo_all_blocks=1 00:15:33.885 --rc geninfo_unexecuted_blocks=1 00:15:33.885 00:15:33.885 ' 00:15:33.885 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:33.885 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:33.885 --rc genhtml_branch_coverage=1 00:15:33.885 --rc genhtml_function_coverage=1 00:15:33.885 --rc genhtml_legend=1 00:15:33.885 --rc geninfo_all_blocks=1 00:15:33.885 --rc geninfo_unexecuted_blocks=1 00:15:33.885 00:15:33.885 ' 00:15:33.885 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:33.885 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:33.885 --rc genhtml_branch_coverage=1 00:15:33.885 --rc genhtml_function_coverage=1 00:15:33.885 --rc genhtml_legend=1 00:15:33.885 --rc geninfo_all_blocks=1 00:15:33.885 --rc geninfo_unexecuted_blocks=1 00:15:33.885 00:15:33.885 ' 00:15:33.885 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:33.885 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:33.885 --rc genhtml_branch_coverage=1 00:15:33.885 --rc genhtml_function_coverage=1 00:15:33.885 --rc genhtml_legend=1 00:15:33.885 --rc geninfo_all_blocks=1 00:15:33.885 --rc geninfo_unexecuted_blocks=1 00:15:33.885 00:15:33.885 ' 00:15:33.885 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:33.885 
20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:15:33.885 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:33.885 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:33.885 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:33.885 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:33.885 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:33.885 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:33.885 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:33.885 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:33.885 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:33.885 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:33.885 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b 00:15:33.885 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=5b7a0101-ee75-44bd-b64f-b6a56d193f2b 00:15:33.885 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:33.885 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:33.885 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:33.885 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:33.885 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:33.885 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:15:33.885 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:33.885 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:33.885 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:33.885 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:33.885 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:33.885 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:33.885 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:15:33.885 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:33.885 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:15:33.885 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:33.885 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:33.885 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:33.885 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:33.885 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:33.885 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:33.885 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:33.885 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:33.885 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:33.885 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:33.885 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:33.885 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:33.885 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:15:33.885 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:33.885 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:33.885 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:33.885 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:33.885 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:33.886 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:33.886 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:33.886 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:33.886 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:15:33.886 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:15:33.886 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:15:33.886 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:15:33.886 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:15:33.886 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@460 -- # nvmf_veth_init 00:15:33.886 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:33.886 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:33.886 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:33.886 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:33.886 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:33.886 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:33.886 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:33.886 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:33.886 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:33.886 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:33.886 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:33.886 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:33.886 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:33.886 
20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:33.886 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:33.886 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:33.886 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:33.886 Cannot find device "nvmf_init_br" 00:15:33.886 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # true 00:15:33.886 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:33.886 Cannot find device "nvmf_init_br2" 00:15:33.886 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # true 00:15:33.886 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:33.886 Cannot find device "nvmf_tgt_br" 00:15:33.886 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@164 -- # true 00:15:33.886 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:15:33.886 Cannot find device "nvmf_tgt_br2" 00:15:33.886 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@165 -- # true 00:15:33.886 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:33.886 Cannot find device "nvmf_init_br" 00:15:33.886 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # true 00:15:33.886 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:33.886 Cannot find device "nvmf_init_br2" 00:15:33.886 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@167 -- # true 00:15:33.886 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:33.886 Cannot find device "nvmf_tgt_br" 00:15:33.886 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@168 -- # true 00:15:33.886 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:33.886 Cannot find device "nvmf_tgt_br2" 00:15:33.886 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # true 00:15:33.886 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:33.886 Cannot find device "nvmf_br" 00:15:33.886 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # true 00:15:33.886 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:33.886 Cannot find device "nvmf_init_if" 00:15:33.886 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # true 00:15:33.886 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:33.886 Cannot find device "nvmf_init_if2" 00:15:33.886 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # true 00:15:33.886 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete 
nvmf_tgt_if 00:15:33.886 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:33.886 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@173 -- # true 00:15:33.886 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:33.886 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:33.886 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # true 00:15:33.886 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:33.886 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:33.886 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:33.886 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:33.886 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:33.886 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:33.886 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:34.146 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:34.146 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:34.146 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:34.146 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:34.146 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:34.146 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:34.146 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:34.146 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:34.146 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:34.146 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:34.146 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:34.146 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:34.146 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:34.146 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:34.146 20:43:28 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:34.146 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:34.146 20:43:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:34.146 20:43:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:34.146 20:43:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:34.146 20:43:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:34.146 20:43:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:34.146 20:43:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:34.146 20:43:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:34.146 20:43:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:34.146 20:43:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:34.146 20:43:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:34.146 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:34.146 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.061 ms 00:15:34.146 00:15:34.146 --- 10.0.0.3 ping statistics --- 00:15:34.146 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:34.146 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:15:34.146 20:43:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:34.146 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:34.146 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.039 ms 00:15:34.146 00:15:34.146 --- 10.0.0.4 ping statistics --- 00:15:34.146 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:34.146 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:15:34.146 20:43:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:34.146 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:34.146 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.016 ms 00:15:34.146 00:15:34.146 --- 10.0.0.1 ping statistics --- 00:15:34.146 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:34.146 rtt min/avg/max/mdev = 0.016/0.016/0.016/0.000 ms 00:15:34.146 20:43:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:34.146 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:34.146 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.053 ms 00:15:34.146 00:15:34.146 --- 10.0.0.2 ping statistics --- 00:15:34.146 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:34.146 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:15:34.146 20:43:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:34.146 20:43:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@461 -- # return 0 00:15:34.146 20:43:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:34.146 20:43:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:34.146 20:43:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:34.146 20:43:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:34.146 20:43:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:34.146 20:43:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:34.146 20:43:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:34.146 20:43:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:15:34.146 20:43:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:34.146 20:43:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:34.146 20:43:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:34.146 20:43:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=71407 00:15:34.146 20:43:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 71407 00:15:34.146 20:43:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 71407 ']' 00:15:34.146 20:43:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:34.146 20:43:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:15:34.146 20:43:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:34.146 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:34.146 20:43:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:34.146 20:43:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:34.146 20:43:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:34.412 [2024-11-26 20:43:29.153627] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
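By this point the bdevio variant has rebuilt the same virtual topology the previous test tore down: the nvmf_tgt_ns_spdk namespace holds the target-side veth ends (10.0.0.3 and 10.0.0.4), the initiator-side ends sit in the root namespace (10.0.0.1 and 10.0.0.2), and the peer ends (nvmf_init_br*, nvmf_tgt_br*) are enslaved to the nvmf_br bridge, which is why all four pings above complete with sub-millisecond RTTs. The target is then started inside the namespace with hugepages disabled, which is the whole point of this test; the launch traced above reduces to the single command below (the 0x78 core mask selects the four reactor cores reported just after it):

ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78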
00:15:34.413 [2024-11-26 20:43:29.153736] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:15:34.413 [2024-11-26 20:43:29.329649] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:34.673 [2024-11-26 20:43:29.426995] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:34.673 [2024-11-26 20:43:29.427063] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:34.673 [2024-11-26 20:43:29.427080] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:34.673 [2024-11-26 20:43:29.427094] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:34.673 [2024-11-26 20:43:29.427105] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:34.673 [2024-11-26 20:43:29.428121] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:15:34.673 [2024-11-26 20:43:29.428257] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:15:34.673 [2024-11-26 20:43:29.428330] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:15:34.673 [2024-11-26 20:43:29.428339] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:34.673 [2024-11-26 20:43:29.435112] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:35.243 20:43:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:35.243 20:43:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:15:35.243 20:43:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:35.243 20:43:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:35.243 20:43:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:35.243 20:43:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:35.243 20:43:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:35.243 20:43:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.243 20:43:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:35.243 [2024-11-26 20:43:30.230815] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:35.502 20:43:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.502 20:43:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:35.502 20:43:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.502 20:43:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:35.502 Malloc0 00:15:35.502 20:43:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.502 20:43:30 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:35.502 20:43:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.502 20:43:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:35.502 20:43:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.502 20:43:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:35.502 20:43:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.502 20:43:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:35.502 20:43:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.502 20:43:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:15:35.502 20:43:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.502 20:43:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:35.502 [2024-11-26 20:43:30.276293] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:35.502 20:43:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.502 20:43:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:15:35.502 20:43:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:15:35.502 20:43:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:15:35.502 20:43:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:15:35.502 20:43:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:15:35.502 20:43:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:15:35.502 { 00:15:35.502 "params": { 00:15:35.502 "name": "Nvme$subsystem", 00:15:35.502 "trtype": "$TEST_TRANSPORT", 00:15:35.502 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:35.502 "adrfam": "ipv4", 00:15:35.502 "trsvcid": "$NVMF_PORT", 00:15:35.502 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:35.502 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:35.502 "hdgst": ${hdgst:-false}, 00:15:35.502 "ddgst": ${ddgst:-false} 00:15:35.502 }, 00:15:35.502 "method": "bdev_nvme_attach_controller" 00:15:35.502 } 00:15:35.502 EOF 00:15:35.502 )") 00:15:35.502 20:43:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:15:35.502 20:43:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 
00:15:35.502 20:43:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:15:35.502 20:43:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:15:35.502 "params": { 00:15:35.502 "name": "Nvme1", 00:15:35.502 "trtype": "tcp", 00:15:35.502 "traddr": "10.0.0.3", 00:15:35.502 "adrfam": "ipv4", 00:15:35.502 "trsvcid": "4420", 00:15:35.502 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:35.502 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:35.502 "hdgst": false, 00:15:35.502 "ddgst": false 00:15:35.502 }, 00:15:35.502 "method": "bdev_nvme_attach_controller" 00:15:35.502 }' 00:15:35.502 [2024-11-26 20:43:30.340572] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:15:35.502 [2024-11-26 20:43:30.340667] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid71444 ] 00:15:35.762 [2024-11-26 20:43:30.507920] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:35.762 [2024-11-26 20:43:30.608596] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:35.762 [2024-11-26 20:43:30.608753] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:35.762 [2024-11-26 20:43:30.608759] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:35.762 [2024-11-26 20:43:30.623573] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:36.029 I/O targets: 00:15:36.029 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:15:36.029 00:15:36.029 00:15:36.029 CUnit - A unit testing framework for C - Version 2.1-3 00:15:36.029 http://cunit.sourceforge.net/ 00:15:36.029 00:15:36.029 00:15:36.029 Suite: bdevio tests on: Nvme1n1 00:15:36.029 Test: blockdev write read block ...passed 00:15:36.029 Test: blockdev write zeroes read block ...passed 00:15:36.029 Test: blockdev write zeroes read no split ...passed 00:15:36.029 Test: blockdev write zeroes read split ...passed 00:15:36.029 Test: blockdev write zeroes read split partial ...passed 00:15:36.029 Test: blockdev reset ...[2024-11-26 20:43:30.924349] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:15:36.029 [2024-11-26 20:43:30.924465] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5a3320 (9): Bad file descriptor 00:15:36.029 passed 00:15:36.029 Test: blockdev write read 8 blocks ...[2024-11-26 20:43:30.942077] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:15:36.029 passed 00:15:36.029 Test: blockdev write read size > 128k ...passed 00:15:36.029 Test: blockdev write read invalid size ...passed 00:15:36.029 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:36.029 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:36.029 Test: blockdev write read max offset ...passed 00:15:36.029 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:36.029 Test: blockdev writev readv 8 blocks ...passed 00:15:36.029 Test: blockdev writev readv 30 x 1block ...passed 00:15:36.029 Test: blockdev writev readv block ...passed 00:15:36.029 Test: blockdev writev readv size > 128k ...passed 00:15:36.029 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:36.029 Test: blockdev comparev and writev ...[2024-11-26 20:43:30.948530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:36.029 [2024-11-26 20:43:30.948570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:36.029 [2024-11-26 20:43:30.948588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:36.029 [2024-11-26 20:43:30.948599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:15:36.029 passed 00:15:36.029 Test: blockdev nvme passthru rw ...[2024-11-26 20:43:30.949025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:36.029 [2024-11-26 20:43:30.949041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:15:36.029 [2024-11-26 20:43:30.949055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:36.029 [2024-11-26 20:43:30.949065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:15:36.029 [2024-11-26 20:43:30.949292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:36.029 [2024-11-26 20:43:30.949304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:15:36.029 [2024-11-26 20:43:30.949318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:36.029 [2024-11-26 20:43:30.949327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:15:36.029 [2024-11-26 20:43:30.949551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:36.029 [2024-11-26 20:43:30.949562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:15:36.029 [2024-11-26 20:43:30.949575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:36.029 [2024-11-26 20:43:30.949584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED 
FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:15:36.029 passed 00:15:36.029 Test: blockdev nvme passthru vendor specific ...passed 00:15:36.029 Test: blockdev nvme admin passthru ...[2024-11-26 20:43:30.950280] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:36.029 [2024-11-26 20:43:30.950300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:15:36.029 [2024-11-26 20:43:30.950393] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:36.029 [2024-11-26 20:43:30.950404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:15:36.029 [2024-11-26 20:43:30.950480] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:36.029 [2024-11-26 20:43:30.950491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:15:36.029 [2024-11-26 20:43:30.950566] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:36.029 [2024-11-26 20:43:30.950577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:15:36.029 passed 00:15:36.029 Test: blockdev copy ...passed 00:15:36.029 00:15:36.029 Run Summary: Type Total Ran Passed Failed Inactive 00:15:36.029 suites 1 1 n/a 0 0 00:15:36.029 tests 23 23 23 0 0 00:15:36.029 asserts 152 152 152 0 n/a 00:15:36.029 00:15:36.029 Elapsed time = 0.167 seconds 00:15:36.598 20:43:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:36.598 20:43:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.598 20:43:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:36.598 20:43:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.598 20:43:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:15:36.598 20:43:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:15:36.598 20:43:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:36.598 20:43:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:15:36.598 20:43:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:36.598 20:43:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:15:36.598 20:43:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:36.598 20:43:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:36.598 rmmod nvme_tcp 00:15:36.598 rmmod nvme_fabrics 00:15:36.598 rmmod nvme_keyring 00:15:36.598 20:43:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:36.598 20:43:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@128 -- # set -e 00:15:36.598 20:43:31 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:15:36.598 20:43:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 71407 ']' 00:15:36.598 20:43:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 71407 00:15:36.598 20:43:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 71407 ']' 00:15:36.598 20:43:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 71407 00:15:36.598 20:43:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:15:36.598 20:43:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:36.598 20:43:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71407 00:15:36.598 20:43:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:15:36.598 20:43:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:15:36.598 killing process with pid 71407 00:15:36.598 20:43:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71407' 00:15:36.598 20:43:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 71407 00:15:36.598 20:43:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 71407 00:15:37.165 20:43:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:37.165 20:43:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:37.165 20:43:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:37.165 20:43:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:15:37.165 20:43:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:15:37.165 20:43:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:37.165 20:43:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:15:37.165 20:43:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:37.165 20:43:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:37.165 20:43:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:37.165 20:43:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:37.165 20:43:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:37.165 20:43:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:37.165 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:37.165 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:37.165 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@239 -- # ip link set 
nvmf_tgt_br down 00:15:37.165 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:37.165 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:37.165 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:37.165 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:37.165 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:37.165 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:37.165 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:37.165 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:37.165 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:37.165 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:37.423 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@300 -- # return 0 00:15:37.423 00:15:37.423 real 0m3.744s 00:15:37.423 user 0m11.429s 00:15:37.423 sys 0m1.693s 00:15:37.423 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:37.423 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:37.423 ************************************ 00:15:37.423 END TEST nvmf_bdevio_no_huge 00:15:37.423 ************************************ 00:15:37.423 20:43:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:15:37.423 20:43:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:37.423 20:43:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:37.423 20:43:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:37.423 ************************************ 00:15:37.423 START TEST nvmf_tls 00:15:37.423 ************************************ 00:15:37.423 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:15:37.423 * Looking for test storage... 
00:15:37.423 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:37.423 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:37.423 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lcov --version 00:15:37.423 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:37.423 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:37.423 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:37.423 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:37.423 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:37.423 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:15:37.423 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:15:37.423 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:15:37.423 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:15:37.423 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:15:37.423 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:15:37.423 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:15:37.423 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:37.423 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:15:37.423 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:15:37.423 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:37.423 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:37.683 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:15:37.683 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:15:37.683 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:37.683 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:15:37.683 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:15:37.683 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:15:37.683 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:15:37.683 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:37.683 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:15:37.683 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:15:37.683 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:37.683 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:37.683 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:15:37.683 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:37.683 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:37.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:37.683 --rc genhtml_branch_coverage=1 00:15:37.683 --rc genhtml_function_coverage=1 00:15:37.683 --rc genhtml_legend=1 00:15:37.683 --rc geninfo_all_blocks=1 00:15:37.683 --rc geninfo_unexecuted_blocks=1 00:15:37.683 00:15:37.683 ' 00:15:37.683 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:37.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:37.683 --rc genhtml_branch_coverage=1 00:15:37.683 --rc genhtml_function_coverage=1 00:15:37.683 --rc genhtml_legend=1 00:15:37.683 --rc geninfo_all_blocks=1 00:15:37.683 --rc geninfo_unexecuted_blocks=1 00:15:37.683 00:15:37.683 ' 00:15:37.683 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:37.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:37.683 --rc genhtml_branch_coverage=1 00:15:37.683 --rc genhtml_function_coverage=1 00:15:37.683 --rc genhtml_legend=1 00:15:37.683 --rc geninfo_all_blocks=1 00:15:37.683 --rc geninfo_unexecuted_blocks=1 00:15:37.683 00:15:37.683 ' 00:15:37.683 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:37.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:37.683 --rc genhtml_branch_coverage=1 00:15:37.683 --rc genhtml_function_coverage=1 00:15:37.683 --rc genhtml_legend=1 00:15:37.683 --rc geninfo_all_blocks=1 00:15:37.683 --rc geninfo_unexecuted_blocks=1 00:15:37.683 00:15:37.683 ' 00:15:37.683 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:37.683 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:15:37.683 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:37.683 20:43:32 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:37.683 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:37.683 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:37.683 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:37.683 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:37.683 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:37.683 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:37.683 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:37.683 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:37.683 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b 00:15:37.683 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=5b7a0101-ee75-44bd-b64f-b6a56d193f2b 00:15:37.683 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:37.683 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:37.683 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:37.683 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:37.683 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:37.683 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:15:37.683 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:37.683 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:37.683 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:37.683 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:37.684 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:37.684 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:37.684 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:15:37.684 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:37.684 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:15:37.684 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:37.684 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:37.684 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:37.684 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:37.684 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:37.684 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:37.684 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:37.684 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:37.684 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:37.684 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:37.684 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:37.684 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:15:37.684 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:37.684 
20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:37.684 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:37.684 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:37.684 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:37.684 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:37.684 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:37.684 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:37.684 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:15:37.684 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:15:37.684 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:15:37.684 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:15:37.684 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:15:37.684 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@460 -- # nvmf_veth_init 00:15:37.684 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:37.684 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:37.684 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:37.684 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:37.684 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:37.684 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:37.684 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:37.684 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:37.684 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:37.684 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:37.684 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:37.684 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:37.684 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:37.684 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:37.684 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:37.684 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:37.684 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:37.684 Cannot find device "nvmf_init_br" 00:15:37.684 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@162 -- # true 00:15:37.684 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:37.684 Cannot find device "nvmf_init_br2" 00:15:37.684 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # true 00:15:37.684 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:37.684 Cannot find device "nvmf_tgt_br" 00:15:37.684 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@164 -- # true 00:15:37.684 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:15:37.684 Cannot find device "nvmf_tgt_br2" 00:15:37.684 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@165 -- # true 00:15:37.684 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:37.684 Cannot find device "nvmf_init_br" 00:15:37.684 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # true 00:15:37.684 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:37.684 Cannot find device "nvmf_init_br2" 00:15:37.684 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@167 -- # true 00:15:37.684 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:37.684 Cannot find device "nvmf_tgt_br" 00:15:37.684 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@168 -- # true 00:15:37.684 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:37.684 Cannot find device "nvmf_tgt_br2" 00:15:37.684 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # true 00:15:37.684 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:37.684 Cannot find device "nvmf_br" 00:15:37.684 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # true 00:15:37.684 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:37.684 Cannot find device "nvmf_init_if" 00:15:37.684 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # true 00:15:37.684 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:37.684 Cannot find device "nvmf_init_if2" 00:15:37.684 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # true 00:15:37.684 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:37.684 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:37.684 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@173 -- # true 00:15:37.684 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:37.684 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:37.684 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # true 00:15:37.684 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:37.684 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:37.684 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@181 -- # ip link 
add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:37.684 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:37.684 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:37.943 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:37.943 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:37.943 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:37.943 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:37.943 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:37.943 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:37.943 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:37.943 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:37.943 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:37.943 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:37.943 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:37.943 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:37.943 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:37.943 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:37.943 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:37.943 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:37.943 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:37.943 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:37.943 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:37.943 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:37.943 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:37.943 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:37.943 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:37.943 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:37.943 20:43:32 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:37.943 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:37.943 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:37.944 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:37.944 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:37.944 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.077 ms 00:15:37.944 00:15:37.944 --- 10.0.0.3 ping statistics --- 00:15:37.944 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:37.944 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:15:37.944 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:37.944 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:37.944 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.085 ms 00:15:37.944 00:15:37.944 --- 10.0.0.4 ping statistics --- 00:15:37.944 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:37.944 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:15:37.944 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:37.944 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:37.944 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.053 ms 00:15:37.944 00:15:37.944 --- 10.0.0.1 ping statistics --- 00:15:37.944 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:37.944 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:15:37.944 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:37.944 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:37.944 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.057 ms 00:15:37.944 00:15:37.944 --- 10.0.0.2 ping statistics --- 00:15:37.944 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:37.944 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:15:37.944 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:37.944 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@461 -- # return 0 00:15:37.944 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:37.944 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:37.944 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:37.944 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:37.944 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:37.944 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:37.944 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:37.944 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:15:37.944 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:37.944 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:37.944 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:38.202 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=71675 00:15:38.202 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:15:38.202 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 71675 00:15:38.202 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71675 ']' 00:15:38.202 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:38.202 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:38.202 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:38.202 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:38.202 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:38.202 20:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:38.202 [2024-11-26 20:43:32.995888] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:15:38.202 [2024-11-26 20:43:32.996668] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:38.202 [2024-11-26 20:43:33.159545] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:38.461 [2024-11-26 20:43:33.236699] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:38.461 [2024-11-26 20:43:33.236766] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:38.461 [2024-11-26 20:43:33.236782] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:38.461 [2024-11-26 20:43:33.236797] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:38.461 [2024-11-26 20:43:33.236808] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:38.461 [2024-11-26 20:43:33.237199] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:39.046 20:43:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:39.046 20:43:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:15:39.046 20:43:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:39.046 20:43:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:39.046 20:43:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:39.305 20:43:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:39.305 20:43:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:15:39.305 20:43:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:15:39.564 true 00:15:39.564 20:43:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:15:39.564 20:43:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:15:39.821 20:43:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:15:39.822 20:43:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:15:39.822 20:43:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:15:40.079 20:43:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:15:40.079 20:43:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:15:40.338 20:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:15:40.338 20:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:15:40.338 20:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:15:40.596 20:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i 
ssl 00:15:40.596 20:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:15:40.855 20:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:15:40.855 20:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:15:40.855 20:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:15:40.855 20:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:15:41.114 20:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:15:41.114 20:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:15:41.114 20:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:15:41.114 20:43:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:15:41.114 20:43:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:15:41.373 20:43:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:15:41.373 20:43:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:15:41.373 20:43:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:15:41.631 20:43:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:15:41.631 20:43:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:15:41.890 20:43:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:15:41.890 20:43:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:15:41.890 20:43:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:15:41.890 20:43:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:15:41.890 20:43:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:15:41.890 20:43:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:15:41.890 20:43:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:15:41.890 20:43:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:15:41.890 20:43:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:15:42.148 20:43:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:15:42.148 20:43:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:15:42.148 20:43:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:15:42.148 20:43:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:15:42.148 20:43:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:15:42.148 20:43:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:15:42.148 20:43:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:15:42.148 20:43:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:15:42.148 20:43:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:15:42.148 20:43:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:15:42.148 20:43:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.vVAXMftipm 00:15:42.148 20:43:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:15:42.148 20:43:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.r6YkqMnUH5 00:15:42.148 20:43:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:15:42.148 20:43:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:15:42.148 20:43:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.vVAXMftipm 00:15:42.148 20:43:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.r6YkqMnUH5 00:15:42.148 20:43:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:15:42.406 20:43:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:15:42.663 [2024-11-26 20:43:37.523723] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:42.663 20:43:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.vVAXMftipm 00:15:42.663 20:43:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.vVAXMftipm 00:15:42.664 20:43:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:15:42.922 [2024-11-26 20:43:37.881143] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:42.922 20:43:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:15:43.179 20:43:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:15:43.438 [2024-11-26 20:43:38.301219] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:43.438 [2024-11-26 20:43:38.301497] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:43.438 20:43:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:15:43.696 malloc0 00:15:43.696 20:43:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:43.954 20:43:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.vVAXMftipm 00:15:44.210 20:43:39 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:15:44.467 20:43:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.vVAXMftipm 00:15:56.660 Initializing NVMe Controllers 00:15:56.660 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:15:56.660 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:56.660 Initialization complete. Launching workers. 00:15:56.660 ======================================================== 00:15:56.660 Latency(us) 00:15:56.660 Device Information : IOPS MiB/s Average min max 00:15:56.660 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 13061.09 51.02 4900.68 1778.88 10302.18 00:15:56.660 ======================================================== 00:15:56.660 Total : 13061.09 51.02 4900.68 1778.88 10302.18 00:15:56.660 00:15:56.660 20:43:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.vVAXMftipm 00:15:56.660 20:43:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:15:56.660 20:43:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:15:56.660 20:43:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:15:56.660 20:43:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.vVAXMftipm 00:15:56.660 20:43:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:56.660 20:43:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71914 00:15:56.660 20:43:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:56.660 20:43:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71914 /var/tmp/bdevperf.sock 00:15:56.660 20:43:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71914 ']' 00:15:56.660 20:43:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:56.660 20:43:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:56.660 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:56.660 20:43:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
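For reference, the target-side configuration that the trace above walks through (setup_nvmf_tgt plus the ssl sock-impl options exercised earlier in tls.sh) reduces to the RPC calls below. This is a minimal sketch, not the test script itself: it assumes a freshly started nvmf_tgt reachable on the default /var/tmp/spdk.sock, keeps the 10.0.0.3:4420 listener and the /tmp/tmp.vVAXMftipm key path from the log, and shortens /home/vagrant/spdk_repo/spdk/scripts/rpc.py to scripts/rpc.py.

# the trace configures the ssl sock implementation before framework_start_init,
# which implies the target was launched waiting for RPCs
scripts/rpc.py sock_set_default_impl -i ssl
scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13
scripts/rpc.py sock_impl_get_options -i ssl | jq -r .tls_version
scripts/rpc.py framework_start_init

# PSK interchange key file, owner-readable only
echo -n 'NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:' > /tmp/tmp.vVAXMftipm
chmod 0600 /tmp/tmp.vVAXMftipm

# TCP transport, subsystem, TLS-enabled listener (-k), backing namespace,
# then register the key and allow host1 to use it
scripts/rpc.py nvmf_create_transport -t tcp -o
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k
scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.vVAXMftipm
scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0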
00:15:56.660 20:43:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:56.660 20:43:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:56.660 20:43:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:56.660 [2024-11-26 20:43:49.525764] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:15:56.660 [2024-11-26 20:43:49.525879] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71914 ] 00:15:56.660 [2024-11-26 20:43:49.683010] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:56.660 [2024-11-26 20:43:49.747734] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:56.660 [2024-11-26 20:43:49.797494] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:56.660 20:43:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:56.660 20:43:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:15:56.660 20:43:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.vVAXMftipm 00:15:56.660 20:43:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:15:56.660 [2024-11-26 20:43:50.918325] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:56.660 TLSTESTn1 00:15:56.660 20:43:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:15:56.660 Running I/O for 10 seconds... 
00:15:58.165 5638.00 IOPS, 22.02 MiB/s [2024-11-26T20:43:54.533Z] 5681.00 IOPS, 22.19 MiB/s [2024-11-26T20:43:55.469Z] 5688.00 IOPS, 22.22 MiB/s [2024-11-26T20:43:56.416Z] 5687.00 IOPS, 22.21 MiB/s [2024-11-26T20:43:57.383Z] 5694.80 IOPS, 22.25 MiB/s [2024-11-26T20:43:58.320Z] 5681.33 IOPS, 22.19 MiB/s [2024-11-26T20:43:59.257Z] 5681.29 IOPS, 22.19 MiB/s [2024-11-26T20:44:00.194Z] 5686.50 IOPS, 22.21 MiB/s [2024-11-26T20:44:01.130Z] 5686.67 IOPS, 22.21 MiB/s [2024-11-26T20:44:01.130Z] 5690.00 IOPS, 22.23 MiB/s 00:16:06.138 Latency(us) 00:16:06.138 [2024-11-26T20:44:01.131Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:06.138 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:16:06.138 Verification LBA range: start 0x0 length 0x2000 00:16:06.138 TLSTESTn1 : 10.01 5695.28 22.25 0.00 0.00 22439.56 4868.39 16352.79 00:16:06.138 [2024-11-26T20:44:01.131Z] =================================================================================================================== 00:16:06.138 [2024-11-26T20:44:01.131Z] Total : 5695.28 22.25 0.00 0.00 22439.56 4868.39 16352.79 00:16:06.138 { 00:16:06.138 "results": [ 00:16:06.138 { 00:16:06.138 "job": "TLSTESTn1", 00:16:06.138 "core_mask": "0x4", 00:16:06.138 "workload": "verify", 00:16:06.138 "status": "finished", 00:16:06.138 "verify_range": { 00:16:06.138 "start": 0, 00:16:06.138 "length": 8192 00:16:06.138 }, 00:16:06.138 "queue_depth": 128, 00:16:06.138 "io_size": 4096, 00:16:06.138 "runtime": 10.013029, 00:16:06.138 "iops": 5695.279620182864, 00:16:06.138 "mibps": 22.247186016339313, 00:16:06.138 "io_failed": 0, 00:16:06.138 "io_timeout": 0, 00:16:06.138 "avg_latency_us": 22439.561106142704, 00:16:06.138 "min_latency_us": 4868.388571428572, 00:16:06.138 "max_latency_us": 16352.792380952382 00:16:06.138 } 00:16:06.138 ], 00:16:06.138 "core_count": 1 00:16:06.138 } 00:16:06.396 20:44:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:06.396 20:44:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 71914 00:16:06.396 20:44:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71914 ']' 00:16:06.396 20:44:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71914 00:16:06.396 20:44:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:06.396 20:44:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:06.396 20:44:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71914 00:16:06.396 20:44:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:16:06.396 20:44:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:16:06.396 killing process with pid 71914 00:16:06.396 20:44:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71914' 00:16:06.396 20:44:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71914 00:16:06.396 Received shutdown signal, test time was about 10.000000 seconds 00:16:06.396 00:16:06.396 Latency(us) 00:16:06.396 [2024-11-26T20:44:01.389Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:06.396 [2024-11-26T20:44:01.389Z] 
=================================================================================================================== 00:16:06.396 [2024-11-26T20:44:01.389Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:06.396 20:44:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71914 00:16:06.396 20:44:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.r6YkqMnUH5 00:16:06.396 20:44:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:16:06.396 20:44:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.r6YkqMnUH5 00:16:06.396 20:44:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:16:06.396 20:44:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:06.396 20:44:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:16:06.396 20:44:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:06.396 20:44:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.r6YkqMnUH5 00:16:06.396 20:44:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:06.396 20:44:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:06.396 20:44:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:06.396 20:44:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.r6YkqMnUH5 00:16:06.396 20:44:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:06.396 20:44:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=72049 00:16:06.396 20:44:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:06.396 20:44:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 72049 /var/tmp/bdevperf.sock 00:16:06.396 20:44:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72049 ']' 00:16:06.396 20:44:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:06.396 20:44:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:06.396 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:06.396 20:44:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
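On the initiator side, each bdevperf case in this run comes down to two RPCs against bdevperf's own socket: register the PSK file under a key name, then attach the controller with --psk pointing at that name. A minimal sketch, assuming the socket path and happy-path key file from the log; the wrong-key case that follows issues the same calls with /tmp/tmp.r6YkqMnUH5 and the attach fails with the "Input/output error" shown below.

# bdevperf waits on its own RPC socket (tls.sh line 27)
build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &

# register the PSK and attach over TLS
scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.vVAXMftipm
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0

# drive I/O through the attached bdev
examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests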
00:16:06.396 20:44:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:06.396 20:44:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:06.396 20:44:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:06.655 [2024-11-26 20:44:01.425675] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:16:06.655 [2024-11-26 20:44:01.425786] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72049 ] 00:16:06.655 [2024-11-26 20:44:01.577961] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:06.655 [2024-11-26 20:44:01.631712] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:06.914 [2024-11-26 20:44:01.675278] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:07.483 20:44:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:07.483 20:44:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:07.483 20:44:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.r6YkqMnUH5 00:16:07.742 20:44:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:16:08.001 [2024-11-26 20:44:02.967231] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:08.001 [2024-11-26 20:44:02.972306] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:16:08.002 [2024-11-26 20:44:02.972714] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dd2ff0 (107): Transport endpoint is not connected 00:16:08.002 [2024-11-26 20:44:02.973702] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dd2ff0 (9): Bad file descriptor 00:16:08.002 [2024-11-26 20:44:02.974701] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:16:08.002 [2024-11-26 20:44:02.974721] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:16:08.002 [2024-11-26 20:44:02.974732] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:16:08.002 [2024-11-26 20:44:02.974747] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:16:08.002 request: 00:16:08.002 { 00:16:08.002 "name": "TLSTEST", 00:16:08.002 "trtype": "tcp", 00:16:08.002 "traddr": "10.0.0.3", 00:16:08.002 "adrfam": "ipv4", 00:16:08.002 "trsvcid": "4420", 00:16:08.002 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:08.002 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:08.002 "prchk_reftag": false, 00:16:08.002 "prchk_guard": false, 00:16:08.002 "hdgst": false, 00:16:08.002 "ddgst": false, 00:16:08.002 "psk": "key0", 00:16:08.002 "allow_unrecognized_csi": false, 00:16:08.002 "method": "bdev_nvme_attach_controller", 00:16:08.002 "req_id": 1 00:16:08.002 } 00:16:08.002 Got JSON-RPC error response 00:16:08.002 response: 00:16:08.002 { 00:16:08.002 "code": -5, 00:16:08.002 "message": "Input/output error" 00:16:08.002 } 00:16:08.261 20:44:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 72049 00:16:08.261 20:44:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72049 ']' 00:16:08.261 20:44:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72049 00:16:08.261 20:44:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:08.261 20:44:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:08.262 20:44:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72049 00:16:08.262 20:44:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:16:08.262 20:44:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:16:08.262 killing process with pid 72049 00:16:08.262 20:44:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72049' 00:16:08.262 Received shutdown signal, test time was about 10.000000 seconds 00:16:08.262 00:16:08.262 Latency(us) 00:16:08.262 [2024-11-26T20:44:03.255Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:08.262 [2024-11-26T20:44:03.255Z] =================================================================================================================== 00:16:08.262 [2024-11-26T20:44:03.255Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:08.262 20:44:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72049 00:16:08.262 20:44:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72049 00:16:08.262 20:44:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:16:08.262 20:44:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:16:08.262 20:44:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:08.262 20:44:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:08.262 20:44:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:08.262 20:44:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.vVAXMftipm 00:16:08.262 20:44:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:16:08.262 20:44:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.vVAXMftipm 
00:16:08.262 20:44:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:16:08.262 20:44:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:08.262 20:44:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:16:08.262 20:44:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:08.262 20:44:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.vVAXMftipm 00:16:08.262 20:44:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:08.262 20:44:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:08.262 20:44:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:16:08.262 20:44:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.vVAXMftipm 00:16:08.262 20:44:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:08.262 20:44:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=72084 00:16:08.262 20:44:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:08.262 20:44:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 72084 /var/tmp/bdevperf.sock 00:16:08.262 20:44:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72084 ']' 00:16:08.262 20:44:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:08.262 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:08.262 20:44:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:08.262 20:44:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:08.262 20:44:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:08.262 20:44:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:08.262 20:44:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:08.521 [2024-11-26 20:44:03.282385] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:16:08.521 [2024-11-26 20:44:03.282495] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72084 ] 00:16:08.521 [2024-11-26 20:44:03.435198] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:08.521 [2024-11-26 20:44:03.485657] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:08.780 [2024-11-26 20:44:03.529747] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:09.359 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:09.359 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:09.359 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.vVAXMftipm 00:16:09.617 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:16:09.875 [2024-11-26 20:44:04.746270] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:09.876 [2024-11-26 20:44:04.752398] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:16:09.876 [2024-11-26 20:44:04.752438] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:16:09.876 [2024-11-26 20:44:04.752483] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:16:09.876 [2024-11-26 20:44:04.752721] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x198eff0 (107): Transport endpoint is not connected 00:16:09.876 [2024-11-26 20:44:04.753711] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x198eff0 (9): Bad file descriptor 00:16:09.876 [2024-11-26 20:44:04.754710] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:16:09.876 [2024-11-26 20:44:04.754733] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:16:09.876 [2024-11-26 20:44:04.754743] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:16:09.876 [2024-11-26 20:44:04.754757] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:16:09.876 request: 00:16:09.876 { 00:16:09.876 "name": "TLSTEST", 00:16:09.876 "trtype": "tcp", 00:16:09.876 "traddr": "10.0.0.3", 00:16:09.876 "adrfam": "ipv4", 00:16:09.876 "trsvcid": "4420", 00:16:09.876 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:09.876 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:16:09.876 "prchk_reftag": false, 00:16:09.876 "prchk_guard": false, 00:16:09.876 "hdgst": false, 00:16:09.876 "ddgst": false, 00:16:09.876 "psk": "key0", 00:16:09.876 "allow_unrecognized_csi": false, 00:16:09.876 "method": "bdev_nvme_attach_controller", 00:16:09.876 "req_id": 1 00:16:09.876 } 00:16:09.876 Got JSON-RPC error response 00:16:09.876 response: 00:16:09.876 { 00:16:09.876 "code": -5, 00:16:09.876 "message": "Input/output error" 00:16:09.876 } 00:16:09.876 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 72084 00:16:09.876 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72084 ']' 00:16:09.876 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72084 00:16:09.876 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:09.876 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:09.876 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72084 00:16:09.876 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:16:09.876 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:16:09.876 killing process with pid 72084 00:16:09.876 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72084' 00:16:09.876 Received shutdown signal, test time was about 10.000000 seconds 00:16:09.876 00:16:09.876 Latency(us) 00:16:09.876 [2024-11-26T20:44:04.869Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:09.876 [2024-11-26T20:44:04.869Z] =================================================================================================================== 00:16:09.876 [2024-11-26T20:44:04.869Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:09.876 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72084 00:16:09.876 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72084 00:16:10.134 20:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:16:10.134 20:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:16:10.134 20:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:10.134 20:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:10.134 20:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:10.134 20:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.vVAXMftipm 00:16:10.134 20:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:16:10.134 20:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.vVAXMftipm 
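Both this failure and the one after it happen on the target rather than in the RPC plumbing: during the TLS handshake the target looks up a PSK by the identity the initiator presents (visible above as "NVMe0R01 <hostnqn> <subnqn>"), and only pairs that were registered with nvmf_subsystem_add_host --psk resolve. Purely as a hypothetical illustration, since the test deliberately leaves these pairs unregistered, the registrations that would let them connect look like:

# wrong-hostnqn case: host2 would need its own entry on cnode1
scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 --psk key0
# wrong-subnqn case: cnode2 would first have to exist with its own TLS listener
scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 --psk key0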
00:16:10.134 20:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:16:10.134 20:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:10.134 20:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:16:10.134 20:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:10.134 20:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.vVAXMftipm 00:16:10.134 20:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:10.134 20:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:16:10.134 20:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:10.134 20:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.vVAXMftipm 00:16:10.134 20:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:10.134 20:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=72107 00:16:10.134 20:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:10.134 20:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 72107 /var/tmp/bdevperf.sock 00:16:10.134 20:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72107 ']' 00:16:10.134 20:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:10.134 20:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:10.134 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:10.134 20:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:10.134 20:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:10.134 20:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:10.134 20:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:10.134 [2024-11-26 20:44:05.054260] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:16:10.134 [2024-11-26 20:44:05.054349] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72107 ] 00:16:10.392 [2024-11-26 20:44:05.191702] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:10.392 [2024-11-26 20:44:05.244257] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:10.392 [2024-11-26 20:44:05.287631] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:11.328 20:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:11.328 20:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:11.328 20:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.vVAXMftipm 00:16:11.328 20:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:16:11.588 [2024-11-26 20:44:06.407634] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:11.588 [2024-11-26 20:44:06.412560] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:16:11.588 [2024-11-26 20:44:06.412596] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:16:11.588 [2024-11-26 20:44:06.412641] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:16:11.588 [2024-11-26 20:44:06.413024] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1808ff0 (107): Transport endpoint is not connected 00:16:11.588 [2024-11-26 20:44:06.414014] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1808ff0 (9): Bad file descriptor 00:16:11.588 [2024-11-26 20:44:06.415012] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:16:11.588 [2024-11-26 20:44:06.415030] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:16:11.588 [2024-11-26 20:44:06.415039] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:16:11.588 [2024-11-26 20:44:06.415053] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 
00:16:11.588 request: 00:16:11.588 { 00:16:11.588 "name": "TLSTEST", 00:16:11.588 "trtype": "tcp", 00:16:11.588 "traddr": "10.0.0.3", 00:16:11.588 "adrfam": "ipv4", 00:16:11.588 "trsvcid": "4420", 00:16:11.588 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:16:11.588 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:11.588 "prchk_reftag": false, 00:16:11.588 "prchk_guard": false, 00:16:11.588 "hdgst": false, 00:16:11.588 "ddgst": false, 00:16:11.588 "psk": "key0", 00:16:11.588 "allow_unrecognized_csi": false, 00:16:11.588 "method": "bdev_nvme_attach_controller", 00:16:11.588 "req_id": 1 00:16:11.588 } 00:16:11.588 Got JSON-RPC error response 00:16:11.588 response: 00:16:11.588 { 00:16:11.588 "code": -5, 00:16:11.588 "message": "Input/output error" 00:16:11.588 } 00:16:11.588 20:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 72107 00:16:11.588 20:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72107 ']' 00:16:11.588 20:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72107 00:16:11.588 20:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:11.588 20:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:11.588 20:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72107 00:16:11.588 20:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:16:11.588 20:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:16:11.588 killing process with pid 72107 00:16:11.588 20:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72107' 00:16:11.588 20:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72107 00:16:11.588 Received shutdown signal, test time was about 10.000000 seconds 00:16:11.588 00:16:11.588 Latency(us) 00:16:11.588 [2024-11-26T20:44:06.581Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:11.588 [2024-11-26T20:44:06.581Z] =================================================================================================================== 00:16:11.588 [2024-11-26T20:44:06.581Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:11.588 20:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72107 00:16:11.847 20:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:16:11.847 20:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:16:11.847 20:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:11.847 20:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:11.847 20:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:11.847 20:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:16:11.847 20:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:16:11.847 20:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:16:11.847 20:44:06 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:16:11.847 20:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:11.847 20:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:16:11.847 20:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:11.847 20:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:16:11.847 20:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:11.847 20:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:11.847 20:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:11.847 20:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:16:11.847 20:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:11.847 20:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=72141 00:16:11.847 20:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:11.847 20:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 72141 /var/tmp/bdevperf.sock 00:16:11.847 20:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72141 ']' 00:16:11.848 20:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:11.848 20:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:11.848 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:11.848 20:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:11.848 20:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:11.848 20:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:11.848 20:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:11.848 [2024-11-26 20:44:06.703288] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:16:11.848 [2024-11-26 20:44:06.703378] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72141 ] 00:16:12.105 [2024-11-26 20:44:06.848938] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:12.105 [2024-11-26 20:44:06.901176] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:12.105 [2024-11-26 20:44:06.945202] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:12.671 20:44:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:12.671 20:44:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:12.671 20:44:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:16:12.929 [2024-11-26 20:44:07.824272] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:16:12.929 [2024-11-26 20:44:07.824321] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:16:12.929 request: 00:16:12.929 { 00:16:12.929 "name": "key0", 00:16:12.929 "path": "", 00:16:12.929 "method": "keyring_file_add_key", 00:16:12.929 "req_id": 1 00:16:12.929 } 00:16:12.929 Got JSON-RPC error response 00:16:12.929 response: 00:16:12.929 { 00:16:12.929 "code": -1, 00:16:12.929 "message": "Operation not permitted" 00:16:12.929 } 00:16:12.929 20:44:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:16:13.188 [2024-11-26 20:44:08.092451] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:13.188 [2024-11-26 20:44:08.092521] bdev_nvme.c:6722:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:16:13.188 request: 00:16:13.188 { 00:16:13.188 "name": "TLSTEST", 00:16:13.188 "trtype": "tcp", 00:16:13.188 "traddr": "10.0.0.3", 00:16:13.188 "adrfam": "ipv4", 00:16:13.188 "trsvcid": "4420", 00:16:13.188 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:13.188 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:13.188 "prchk_reftag": false, 00:16:13.188 "prchk_guard": false, 00:16:13.188 "hdgst": false, 00:16:13.188 "ddgst": false, 00:16:13.188 "psk": "key0", 00:16:13.188 "allow_unrecognized_csi": false, 00:16:13.188 "method": "bdev_nvme_attach_controller", 00:16:13.188 "req_id": 1 00:16:13.188 } 00:16:13.188 Got JSON-RPC error response 00:16:13.188 response: 00:16:13.188 { 00:16:13.188 "code": -126, 00:16:13.188 "message": "Required key not available" 00:16:13.188 } 00:16:13.188 20:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 72141 00:16:13.188 20:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72141 ']' 00:16:13.188 20:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72141 00:16:13.188 20:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:13.188 20:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:13.188 20:44:08 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72141 00:16:13.188 20:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:16:13.188 20:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:16:13.188 killing process with pid 72141 00:16:13.188 20:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72141' 00:16:13.188 20:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72141 00:16:13.188 Received shutdown signal, test time was about 10.000000 seconds 00:16:13.188 00:16:13.188 Latency(us) 00:16:13.188 [2024-11-26T20:44:08.181Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:13.188 [2024-11-26T20:44:08.181Z] =================================================================================================================== 00:16:13.188 [2024-11-26T20:44:08.181Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:13.188 20:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72141 00:16:13.447 20:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:16:13.447 20:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:16:13.447 20:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:13.447 20:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:13.447 20:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:13.447 20:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 71675 00:16:13.447 20:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71675 ']' 00:16:13.447 20:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71675 00:16:13.447 20:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:13.447 20:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:13.447 20:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71675 00:16:13.447 20:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:13.447 20:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:16:13.447 killing process with pid 71675 00:16:13.447 20:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71675' 00:16:13.447 20:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71675 00:16:13.447 20:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71675 00:16:13.706 20:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:16:13.706 20:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:16:13.706 20:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:16:13.706 20:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 
-- # prefix=NVMeTLSkey-1 00:16:13.706 20:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:16:13.706 20:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:16:13.706 20:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:16:13.706 20:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:16:13.706 20:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:16:13.706 20:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.MAF01qfXiS 00:16:13.706 20:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:16:13.706 20:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.MAF01qfXiS 00:16:13.706 20:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:16:13.706 20:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:13.965 20:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:13.965 20:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:13.965 20:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=72185 00:16:13.965 20:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 72185 00:16:13.965 20:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72185 ']' 00:16:13.965 20:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:13.965 20:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:13.965 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:13.965 20:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:13.965 20:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:13.965 20:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:13.965 20:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:13.965 [2024-11-26 20:44:08.767940] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:16:13.965 [2024-11-26 20:44:08.768050] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:13.965 [2024-11-26 20:44:08.923011] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:14.224 [2024-11-26 20:44:08.986466] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:14.224 [2024-11-26 20:44:08.986525] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:16:14.224 [2024-11-26 20:44:08.986536] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:14.224 [2024-11-26 20:44:08.986545] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:14.224 [2024-11-26 20:44:08.986552] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:14.224 [2024-11-26 20:44:08.986922] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:14.224 [2024-11-26 20:44:09.064179] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:15.158 20:44:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:15.158 20:44:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:15.158 20:44:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:15.158 20:44:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:15.158 20:44:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:15.158 20:44:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:15.158 20:44:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.MAF01qfXiS 00:16:15.158 20:44:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.MAF01qfXiS 00:16:15.158 20:44:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:16:15.158 [2024-11-26 20:44:10.024974] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:15.158 20:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:16:15.416 20:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:16:15.675 [2024-11-26 20:44:10.501036] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:15.675 [2024-11-26 20:44:10.501285] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:15.675 20:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:16:15.933 malloc0 00:16:15.933 20:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:16:16.191 20:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.MAF01qfXiS 00:16:16.449 20:44:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:16:16.449 20:44:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.MAF01qfXiS 00:16:16.449 20:44:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 
00:16:16.449 20:44:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:16.449 20:44:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:16.449 20:44:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.MAF01qfXiS 00:16:16.449 20:44:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:16.449 20:44:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=72241 00:16:16.449 20:44:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:16.449 20:44:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:16.449 20:44:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 72241 /var/tmp/bdevperf.sock 00:16:16.449 20:44:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72241 ']' 00:16:16.449 20:44:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:16.449 20:44:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:16.449 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:16.449 20:44:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:16.449 20:44:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:16.449 20:44:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:16.707 [2024-11-26 20:44:11.458220] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
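The initiator side of this test is bdevperf running with its own RPC socket. The launch just traced, together with the RPCs issued against that socket in the next stretch of the log, condense to the sketch below (arguments copied from the trace).

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
BPERF_SOCK=/var/tmp/bdevperf.sock

/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -m 0x4 -z -r "$BPERF_SOCK" -q 128 -o 4096 -w verify -t 10 &

# once the socket is up, hand bdevperf the same PSK and attach the controller over TLS
$RPC -s "$BPERF_SOCK" keyring_file_add_key key0 /tmp/tmp.MAF01qfXiS
$RPC -s "$BPERF_SOCK" bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0

# start the timed run (the "Running I/O for 10 seconds" output further below)
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s "$BPERF_SOCK" perform_tests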
00:16:16.707 [2024-11-26 20:44:11.458350] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72241 ] 00:16:16.707 [2024-11-26 20:44:11.608407] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:16.707 [2024-11-26 20:44:11.657843] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:16.965 [2024-11-26 20:44:11.701722] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:17.532 20:44:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:17.532 20:44:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:17.532 20:44:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.MAF01qfXiS 00:16:17.791 20:44:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:16:18.050 [2024-11-26 20:44:12.850389] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:18.050 TLSTESTn1 00:16:18.050 20:44:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:16:18.050 Running I/O for 10 seconds... 00:16:20.381 5734.00 IOPS, 22.40 MiB/s [2024-11-26T20:44:16.311Z] 5750.00 IOPS, 22.46 MiB/s [2024-11-26T20:44:17.247Z] 5743.00 IOPS, 22.43 MiB/s [2024-11-26T20:44:18.182Z] 5723.25 IOPS, 22.36 MiB/s [2024-11-26T20:44:19.118Z] 5699.60 IOPS, 22.26 MiB/s [2024-11-26T20:44:20.059Z] 5706.00 IOPS, 22.29 MiB/s [2024-11-26T20:44:21.433Z] 5711.43 IOPS, 22.31 MiB/s [2024-11-26T20:44:22.383Z] 5705.88 IOPS, 22.29 MiB/s [2024-11-26T20:44:23.336Z] 5702.33 IOPS, 22.27 MiB/s [2024-11-26T20:44:23.336Z] 5698.50 IOPS, 22.26 MiB/s 00:16:28.343 Latency(us) 00:16:28.343 [2024-11-26T20:44:23.336Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:28.343 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:16:28.343 Verification LBA range: start 0x0 length 0x2000 00:16:28.343 TLSTESTn1 : 10.01 5704.47 22.28 0.00 0.00 22403.83 4181.82 17476.27 00:16:28.343 [2024-11-26T20:44:23.336Z] =================================================================================================================== 00:16:28.343 [2024-11-26T20:44:23.336Z] Total : 5704.47 22.28 0.00 0.00 22403.83 4181.82 17476.27 00:16:28.343 { 00:16:28.343 "results": [ 00:16:28.343 { 00:16:28.343 "job": "TLSTESTn1", 00:16:28.343 "core_mask": "0x4", 00:16:28.343 "workload": "verify", 00:16:28.343 "status": "finished", 00:16:28.343 "verify_range": { 00:16:28.343 "start": 0, 00:16:28.343 "length": 8192 00:16:28.343 }, 00:16:28.343 "queue_depth": 128, 00:16:28.343 "io_size": 4096, 00:16:28.343 "runtime": 10.011624, 00:16:28.343 "iops": 5704.469125088996, 00:16:28.343 "mibps": 22.283082519878892, 00:16:28.343 "io_failed": 0, 00:16:28.343 "io_timeout": 0, 00:16:28.343 "avg_latency_us": 22403.827913261644, 00:16:28.343 "min_latency_us": 4181.820952380953, 00:16:28.343 
"max_latency_us": 17476.266666666666 00:16:28.343 } 00:16:28.343 ], 00:16:28.343 "core_count": 1 00:16:28.343 } 00:16:28.343 20:44:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:28.343 20:44:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 72241 00:16:28.343 20:44:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72241 ']' 00:16:28.343 20:44:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72241 00:16:28.343 20:44:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:28.343 20:44:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:28.343 20:44:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72241 00:16:28.343 20:44:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:16:28.343 20:44:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:16:28.343 killing process with pid 72241 00:16:28.343 20:44:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72241' 00:16:28.343 Received shutdown signal, test time was about 10.000000 seconds 00:16:28.343 00:16:28.343 Latency(us) 00:16:28.343 [2024-11-26T20:44:23.337Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:28.344 [2024-11-26T20:44:23.337Z] =================================================================================================================== 00:16:28.344 [2024-11-26T20:44:23.337Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:28.344 20:44:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72241 00:16:28.344 20:44:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72241 00:16:28.344 20:44:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.MAF01qfXiS 00:16:28.344 20:44:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.MAF01qfXiS 00:16:28.344 20:44:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:16:28.344 20:44:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.MAF01qfXiS 00:16:28.344 20:44:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:16:28.344 20:44:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:28.344 20:44:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:16:28.344 20:44:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:28.344 20:44:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.MAF01qfXiS 00:16:28.344 20:44:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:28.344 20:44:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:28.344 20:44:23 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:28.344 20:44:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.MAF01qfXiS 00:16:28.344 20:44:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:28.344 20:44:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=72378 00:16:28.344 20:44:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:28.344 20:44:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:28.344 20:44:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 72378 /var/tmp/bdevperf.sock 00:16:28.344 20:44:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72378 ']' 00:16:28.344 20:44:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:28.344 20:44:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:28.344 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:28.344 20:44:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:28.344 20:44:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:28.344 20:44:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:28.602 [2024-11-26 20:44:23.363849] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
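Context for this second bdevperf run: at target/tls.sh@171 above the PSK file was deliberately relaxed to 0666, and the run is wrapped in NOT because it is expected to fail; the keyring refuses key files that are readable by group or other. A minimal reproduction of just that rejection, using the same paths as the trace:

chmod 0666 /tmp/tmp.MAF01qfXiS
# expected, as shown in the trace below: keyring_file_check_path reports mode 0100666
# and the RPC fails with -1 "Operation not permitted"
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
    keyring_file_add_key key0 /tmp/tmp.MAF01qfXiS || echo "rejected, as the test expects"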
00:16:28.602 [2024-11-26 20:44:23.363963] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72378 ] 00:16:28.602 [2024-11-26 20:44:23.509261] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:28.602 [2024-11-26 20:44:23.559264] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:28.862 [2024-11-26 20:44:23.602708] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:29.428 20:44:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:29.428 20:44:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:29.428 20:44:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.MAF01qfXiS 00:16:29.686 [2024-11-26 20:44:24.518331] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.MAF01qfXiS': 0100666 00:16:29.686 [2024-11-26 20:44:24.518378] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:16:29.686 request: 00:16:29.686 { 00:16:29.686 "name": "key0", 00:16:29.686 "path": "/tmp/tmp.MAF01qfXiS", 00:16:29.686 "method": "keyring_file_add_key", 00:16:29.686 "req_id": 1 00:16:29.686 } 00:16:29.686 Got JSON-RPC error response 00:16:29.686 response: 00:16:29.686 { 00:16:29.686 "code": -1, 00:16:29.686 "message": "Operation not permitted" 00:16:29.686 } 00:16:29.686 20:44:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:16:29.944 [2024-11-26 20:44:24.770483] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:29.944 [2024-11-26 20:44:24.770541] bdev_nvme.c:6722:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:16:29.944 request: 00:16:29.944 { 00:16:29.944 "name": "TLSTEST", 00:16:29.944 "trtype": "tcp", 00:16:29.944 "traddr": "10.0.0.3", 00:16:29.944 "adrfam": "ipv4", 00:16:29.944 "trsvcid": "4420", 00:16:29.944 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:29.944 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:29.944 "prchk_reftag": false, 00:16:29.944 "prchk_guard": false, 00:16:29.944 "hdgst": false, 00:16:29.944 "ddgst": false, 00:16:29.944 "psk": "key0", 00:16:29.944 "allow_unrecognized_csi": false, 00:16:29.944 "method": "bdev_nvme_attach_controller", 00:16:29.944 "req_id": 1 00:16:29.945 } 00:16:29.945 Got JSON-RPC error response 00:16:29.945 response: 00:16:29.945 { 00:16:29.945 "code": -126, 00:16:29.945 "message": "Required key not available" 00:16:29.945 } 00:16:29.945 20:44:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 72378 00:16:29.945 20:44:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72378 ']' 00:16:29.945 20:44:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72378 00:16:29.945 20:44:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:29.945 20:44:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:29.945 20:44:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72378 00:16:29.945 20:44:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:16:29.945 20:44:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:16:29.945 killing process with pid 72378 00:16:29.945 20:44:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72378' 00:16:29.945 20:44:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72378 00:16:29.945 Received shutdown signal, test time was about 10.000000 seconds 00:16:29.945 00:16:29.945 Latency(us) 00:16:29.945 [2024-11-26T20:44:24.938Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:29.945 [2024-11-26T20:44:24.938Z] =================================================================================================================== 00:16:29.945 [2024-11-26T20:44:24.938Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:29.945 20:44:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72378 00:16:30.203 20:44:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:16:30.203 20:44:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:16:30.203 20:44:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:30.203 20:44:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:30.203 20:44:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:30.203 20:44:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 72185 00:16:30.203 20:44:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72185 ']' 00:16:30.203 20:44:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72185 00:16:30.203 20:44:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:30.203 20:44:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:30.203 20:44:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72185 00:16:30.203 20:44:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:30.203 killing process with pid 72185 00:16:30.203 20:44:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:16:30.203 20:44:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72185' 00:16:30.203 20:44:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72185 00:16:30.203 20:44:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72185 00:16:30.462 20:44:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:16:30.462 20:44:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:30.462 20:44:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:30.462 20:44:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set 
+x 00:16:30.462 20:44:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=72412 00:16:30.462 20:44:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:30.462 20:44:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 72412 00:16:30.462 20:44:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72412 ']' 00:16:30.462 20:44:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:30.462 20:44:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:30.462 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:30.462 20:44:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:30.462 20:44:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:30.462 20:44:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:30.462 [2024-11-26 20:44:25.378411] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:16:30.462 [2024-11-26 20:44:25.378491] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:30.720 [2024-11-26 20:44:25.515478] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:30.720 [2024-11-26 20:44:25.580422] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:30.720 [2024-11-26 20:44:25.580483] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:30.720 [2024-11-26 20:44:25.580493] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:30.720 [2024-11-26 20:44:25.580502] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:30.720 [2024-11-26 20:44:25.580509] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
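Each nvmfappstart in this log brings a fresh target up the same way; pulling the pieces out of the trace (with a simple polling loop as a rough stand-in for waitforlisten, which is an approximation rather than the helper's actual implementation):

# flags as used above: -i 0 shared-memory id, -e 0xFFFF enable all tracepoint groups,
# -m 0x2 run on core 1 only; the process lives in the nvmf_tgt_ns_spdk network namespace
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &

# rough stand-in for waitforlisten: block until the RPC socket exists
until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done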
00:16:30.720 [2024-11-26 20:44:25.580814] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:30.720 [2024-11-26 20:44:25.658768] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:31.287 20:44:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:31.287 20:44:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:31.287 20:44:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:31.287 20:44:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:31.287 20:44:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:31.546 20:44:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:31.546 20:44:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.MAF01qfXiS 00:16:31.546 20:44:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:16:31.546 20:44:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.MAF01qfXiS 00:16:31.546 20:44:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:16:31.546 20:44:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:31.546 20:44:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:16:31.546 20:44:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:31.546 20:44:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.MAF01qfXiS 00:16:31.546 20:44:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.MAF01qfXiS 00:16:31.546 20:44:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:16:31.804 [2024-11-26 20:44:26.544505] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:31.804 20:44:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:16:32.063 20:44:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:16:32.063 [2024-11-26 20:44:27.032622] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:32.063 [2024-11-26 20:44:27.032868] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:32.063 20:44:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:16:32.322 malloc0 00:16:32.322 20:44:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:16:32.581 20:44:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.MAF01qfXiS 00:16:32.840 
[2024-11-26 20:44:27.638778] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.MAF01qfXiS': 0100666 00:16:32.840 [2024-11-26 20:44:27.638828] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:16:32.840 request: 00:16:32.840 { 00:16:32.840 "name": "key0", 00:16:32.840 "path": "/tmp/tmp.MAF01qfXiS", 00:16:32.840 "method": "keyring_file_add_key", 00:16:32.840 "req_id": 1 00:16:32.840 } 00:16:32.840 Got JSON-RPC error response 00:16:32.840 response: 00:16:32.840 { 00:16:32.840 "code": -1, 00:16:32.840 "message": "Operation not permitted" 00:16:32.840 } 00:16:32.840 20:44:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:16:33.100 [2024-11-26 20:44:27.846837] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:16:33.100 [2024-11-26 20:44:27.846903] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:16:33.100 request: 00:16:33.100 { 00:16:33.100 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:33.100 "host": "nqn.2016-06.io.spdk:host1", 00:16:33.100 "psk": "key0", 00:16:33.100 "method": "nvmf_subsystem_add_host", 00:16:33.100 "req_id": 1 00:16:33.100 } 00:16:33.100 Got JSON-RPC error response 00:16:33.100 response: 00:16:33.100 { 00:16:33.100 "code": -32603, 00:16:33.100 "message": "Internal error" 00:16:33.100 } 00:16:33.100 20:44:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:16:33.100 20:44:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:33.100 20:44:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:33.100 20:44:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:33.100 20:44:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 72412 00:16:33.100 20:44:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72412 ']' 00:16:33.100 20:44:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72412 00:16:33.100 20:44:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:33.100 20:44:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:33.100 20:44:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72412 00:16:33.100 20:44:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:33.100 20:44:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:16:33.100 killing process with pid 72412 00:16:33.100 20:44:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72412' 00:16:33.100 20:44:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72412 00:16:33.100 20:44:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72412 00:16:33.360 20:44:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.MAF01qfXiS 00:16:33.360 20:44:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:16:33.360 20:44:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:33.360 20:44:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:33.360 20:44:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:33.360 20:44:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:33.360 20:44:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=72481 00:16:33.360 20:44:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 72481 00:16:33.360 20:44:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72481 ']' 00:16:33.360 20:44:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:33.360 20:44:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:33.360 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:33.360 20:44:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:33.360 20:44:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:33.360 20:44:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:33.360 [2024-11-26 20:44:28.236740] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:16:33.360 [2024-11-26 20:44:28.236822] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:33.619 [2024-11-26 20:44:28.378290] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:33.619 [2024-11-26 20:44:28.442785] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:33.619 [2024-11-26 20:44:28.442845] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:33.619 [2024-11-26 20:44:28.442855] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:33.619 [2024-11-26 20:44:28.442864] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:33.619 [2024-11-26 20:44:28.442872] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
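The app_setup_trace notices that end here point at two ways to look at the tracepoints enabled by -e 0xFFFF; spelled out below (the spdk_trace binary path is assumed to be the usual build/bin location in this workspace):

/home/vagrant/spdk_repo/spdk/build/bin/spdk_trace -s nvmf -i 0    # live snapshot while the target runs
cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0                        # or keep the ring buffer for offline analysis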
00:16:33.619 [2024-11-26 20:44:28.443245] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:33.619 [2024-11-26 20:44:28.521431] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:34.554 20:44:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:34.554 20:44:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:34.554 20:44:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:34.554 20:44:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:34.554 20:44:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:34.554 20:44:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:34.554 20:44:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.MAF01qfXiS 00:16:34.554 20:44:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.MAF01qfXiS 00:16:34.554 20:44:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:16:34.554 [2024-11-26 20:44:29.482685] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:34.554 20:44:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:16:34.822 20:44:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:16:35.098 [2024-11-26 20:44:29.958812] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:35.098 [2024-11-26 20:44:29.959069] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:35.098 20:44:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:16:35.356 malloc0 00:16:35.356 20:44:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:16:35.614 20:44:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.MAF01qfXiS 00:16:35.872 20:44:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:16:36.131 20:44:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=72537 00:16:36.131 20:44:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:36.131 20:44:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:36.131 20:44:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 72537 /var/tmp/bdevperf.sock 00:16:36.131 20:44:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72537 ']' 
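Shortly below, target/tls.sh@198 and @199 dump the full JSON configuration of both processes with rpc.py save_config; those dumps are the two large JSON blocks that follow. For reference, the same dumps can be taken and replayed by hand; the load_config/--json replay paths are assumptions about standard SPDK usage, not something this log exercises.

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

$RPC save_config > tgt_config.json                                  # nvmf target, /var/tmp/spdk.sock
$RPC -s /var/tmp/bdevperf.sock save_config > bdevperf_config.json   # bdevperf side

# either replay into a fresh process at startup:   nvmf_tgt --json tgt_config.json
# or apply over RPC to a running one:              $RPC load_config < tgt_config.json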
00:16:36.131 20:44:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:36.131 20:44:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:36.131 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:36.131 20:44:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:36.131 20:44:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:36.131 20:44:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:36.131 [2024-11-26 20:44:31.091752] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:16:36.131 [2024-11-26 20:44:31.091839] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72537 ] 00:16:36.389 [2024-11-26 20:44:31.231981] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:36.389 [2024-11-26 20:44:31.284971] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:36.389 [2024-11-26 20:44:31.328680] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:37.328 20:44:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:37.328 20:44:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:37.328 20:44:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.MAF01qfXiS 00:16:37.328 20:44:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:16:37.588 [2024-11-26 20:44:32.449445] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:37.588 TLSTESTn1 00:16:37.588 20:44:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:16:38.155 20:44:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:16:38.155 "subsystems": [ 00:16:38.155 { 00:16:38.155 "subsystem": "keyring", 00:16:38.155 "config": [ 00:16:38.155 { 00:16:38.155 "method": "keyring_file_add_key", 00:16:38.155 "params": { 00:16:38.155 "name": "key0", 00:16:38.155 "path": "/tmp/tmp.MAF01qfXiS" 00:16:38.155 } 00:16:38.155 } 00:16:38.155 ] 00:16:38.155 }, 00:16:38.155 { 00:16:38.155 "subsystem": "iobuf", 00:16:38.155 "config": [ 00:16:38.155 { 00:16:38.155 "method": "iobuf_set_options", 00:16:38.155 "params": { 00:16:38.155 "small_pool_count": 8192, 00:16:38.155 "large_pool_count": 1024, 00:16:38.155 "small_bufsize": 8192, 00:16:38.155 "large_bufsize": 135168, 00:16:38.155 "enable_numa": false 00:16:38.155 } 00:16:38.155 } 00:16:38.155 ] 00:16:38.155 }, 00:16:38.155 { 00:16:38.155 "subsystem": "sock", 00:16:38.155 "config": [ 00:16:38.155 { 00:16:38.155 "method": "sock_set_default_impl", 00:16:38.155 "params": { 
00:16:38.155 "impl_name": "uring" 00:16:38.155 } 00:16:38.155 }, 00:16:38.155 { 00:16:38.155 "method": "sock_impl_set_options", 00:16:38.155 "params": { 00:16:38.155 "impl_name": "ssl", 00:16:38.155 "recv_buf_size": 4096, 00:16:38.155 "send_buf_size": 4096, 00:16:38.155 "enable_recv_pipe": true, 00:16:38.155 "enable_quickack": false, 00:16:38.155 "enable_placement_id": 0, 00:16:38.155 "enable_zerocopy_send_server": true, 00:16:38.155 "enable_zerocopy_send_client": false, 00:16:38.155 "zerocopy_threshold": 0, 00:16:38.155 "tls_version": 0, 00:16:38.155 "enable_ktls": false 00:16:38.155 } 00:16:38.156 }, 00:16:38.156 { 00:16:38.156 "method": "sock_impl_set_options", 00:16:38.156 "params": { 00:16:38.156 "impl_name": "posix", 00:16:38.156 "recv_buf_size": 2097152, 00:16:38.156 "send_buf_size": 2097152, 00:16:38.156 "enable_recv_pipe": true, 00:16:38.156 "enable_quickack": false, 00:16:38.156 "enable_placement_id": 0, 00:16:38.156 "enable_zerocopy_send_server": true, 00:16:38.156 "enable_zerocopy_send_client": false, 00:16:38.156 "zerocopy_threshold": 0, 00:16:38.156 "tls_version": 0, 00:16:38.156 "enable_ktls": false 00:16:38.156 } 00:16:38.156 }, 00:16:38.156 { 00:16:38.156 "method": "sock_impl_set_options", 00:16:38.156 "params": { 00:16:38.156 "impl_name": "uring", 00:16:38.156 "recv_buf_size": 2097152, 00:16:38.156 "send_buf_size": 2097152, 00:16:38.156 "enable_recv_pipe": true, 00:16:38.156 "enable_quickack": false, 00:16:38.156 "enable_placement_id": 0, 00:16:38.156 "enable_zerocopy_send_server": false, 00:16:38.156 "enable_zerocopy_send_client": false, 00:16:38.156 "zerocopy_threshold": 0, 00:16:38.156 "tls_version": 0, 00:16:38.156 "enable_ktls": false 00:16:38.156 } 00:16:38.156 } 00:16:38.156 ] 00:16:38.156 }, 00:16:38.156 { 00:16:38.156 "subsystem": "vmd", 00:16:38.156 "config": [] 00:16:38.156 }, 00:16:38.156 { 00:16:38.156 "subsystem": "accel", 00:16:38.156 "config": [ 00:16:38.156 { 00:16:38.156 "method": "accel_set_options", 00:16:38.156 "params": { 00:16:38.156 "small_cache_size": 128, 00:16:38.156 "large_cache_size": 16, 00:16:38.156 "task_count": 2048, 00:16:38.156 "sequence_count": 2048, 00:16:38.156 "buf_count": 2048 00:16:38.156 } 00:16:38.156 } 00:16:38.156 ] 00:16:38.156 }, 00:16:38.156 { 00:16:38.156 "subsystem": "bdev", 00:16:38.156 "config": [ 00:16:38.156 { 00:16:38.156 "method": "bdev_set_options", 00:16:38.156 "params": { 00:16:38.156 "bdev_io_pool_size": 65535, 00:16:38.156 "bdev_io_cache_size": 256, 00:16:38.156 "bdev_auto_examine": true, 00:16:38.156 "iobuf_small_cache_size": 128, 00:16:38.156 "iobuf_large_cache_size": 16 00:16:38.156 } 00:16:38.156 }, 00:16:38.156 { 00:16:38.156 "method": "bdev_raid_set_options", 00:16:38.156 "params": { 00:16:38.156 "process_window_size_kb": 1024, 00:16:38.156 "process_max_bandwidth_mb_sec": 0 00:16:38.156 } 00:16:38.156 }, 00:16:38.156 { 00:16:38.156 "method": "bdev_iscsi_set_options", 00:16:38.156 "params": { 00:16:38.156 "timeout_sec": 30 00:16:38.156 } 00:16:38.156 }, 00:16:38.156 { 00:16:38.156 "method": "bdev_nvme_set_options", 00:16:38.156 "params": { 00:16:38.156 "action_on_timeout": "none", 00:16:38.156 "timeout_us": 0, 00:16:38.156 "timeout_admin_us": 0, 00:16:38.156 "keep_alive_timeout_ms": 10000, 00:16:38.156 "arbitration_burst": 0, 00:16:38.156 "low_priority_weight": 0, 00:16:38.156 "medium_priority_weight": 0, 00:16:38.156 "high_priority_weight": 0, 00:16:38.156 "nvme_adminq_poll_period_us": 10000, 00:16:38.156 "nvme_ioq_poll_period_us": 0, 00:16:38.156 "io_queue_requests": 0, 00:16:38.156 "delay_cmd_submit": 
true, 00:16:38.156 "transport_retry_count": 4, 00:16:38.156 "bdev_retry_count": 3, 00:16:38.156 "transport_ack_timeout": 0, 00:16:38.156 "ctrlr_loss_timeout_sec": 0, 00:16:38.156 "reconnect_delay_sec": 0, 00:16:38.156 "fast_io_fail_timeout_sec": 0, 00:16:38.156 "disable_auto_failback": false, 00:16:38.156 "generate_uuids": false, 00:16:38.156 "transport_tos": 0, 00:16:38.156 "nvme_error_stat": false, 00:16:38.156 "rdma_srq_size": 0, 00:16:38.156 "io_path_stat": false, 00:16:38.156 "allow_accel_sequence": false, 00:16:38.156 "rdma_max_cq_size": 0, 00:16:38.156 "rdma_cm_event_timeout_ms": 0, 00:16:38.156 "dhchap_digests": [ 00:16:38.156 "sha256", 00:16:38.156 "sha384", 00:16:38.156 "sha512" 00:16:38.156 ], 00:16:38.156 "dhchap_dhgroups": [ 00:16:38.156 "null", 00:16:38.156 "ffdhe2048", 00:16:38.156 "ffdhe3072", 00:16:38.156 "ffdhe4096", 00:16:38.156 "ffdhe6144", 00:16:38.156 "ffdhe8192" 00:16:38.156 ] 00:16:38.156 } 00:16:38.156 }, 00:16:38.156 { 00:16:38.156 "method": "bdev_nvme_set_hotplug", 00:16:38.156 "params": { 00:16:38.156 "period_us": 100000, 00:16:38.156 "enable": false 00:16:38.156 } 00:16:38.156 }, 00:16:38.156 { 00:16:38.156 "method": "bdev_malloc_create", 00:16:38.156 "params": { 00:16:38.156 "name": "malloc0", 00:16:38.156 "num_blocks": 8192, 00:16:38.156 "block_size": 4096, 00:16:38.156 "physical_block_size": 4096, 00:16:38.156 "uuid": "3cfa1985-91fb-420f-81c8-d10d25ec33ef", 00:16:38.156 "optimal_io_boundary": 0, 00:16:38.156 "md_size": 0, 00:16:38.156 "dif_type": 0, 00:16:38.156 "dif_is_head_of_md": false, 00:16:38.156 "dif_pi_format": 0 00:16:38.156 } 00:16:38.156 }, 00:16:38.156 { 00:16:38.156 "method": "bdev_wait_for_examine" 00:16:38.156 } 00:16:38.156 ] 00:16:38.156 }, 00:16:38.156 { 00:16:38.156 "subsystem": "nbd", 00:16:38.156 "config": [] 00:16:38.156 }, 00:16:38.156 { 00:16:38.156 "subsystem": "scheduler", 00:16:38.156 "config": [ 00:16:38.156 { 00:16:38.156 "method": "framework_set_scheduler", 00:16:38.156 "params": { 00:16:38.156 "name": "static" 00:16:38.156 } 00:16:38.156 } 00:16:38.156 ] 00:16:38.156 }, 00:16:38.156 { 00:16:38.156 "subsystem": "nvmf", 00:16:38.156 "config": [ 00:16:38.156 { 00:16:38.156 "method": "nvmf_set_config", 00:16:38.156 "params": { 00:16:38.156 "discovery_filter": "match_any", 00:16:38.156 "admin_cmd_passthru": { 00:16:38.156 "identify_ctrlr": false 00:16:38.156 }, 00:16:38.156 "dhchap_digests": [ 00:16:38.156 "sha256", 00:16:38.156 "sha384", 00:16:38.156 "sha512" 00:16:38.156 ], 00:16:38.156 "dhchap_dhgroups": [ 00:16:38.156 "null", 00:16:38.156 "ffdhe2048", 00:16:38.156 "ffdhe3072", 00:16:38.156 "ffdhe4096", 00:16:38.156 "ffdhe6144", 00:16:38.156 "ffdhe8192" 00:16:38.156 ] 00:16:38.156 } 00:16:38.156 }, 00:16:38.156 { 00:16:38.156 "method": "nvmf_set_max_subsystems", 00:16:38.156 "params": { 00:16:38.156 "max_subsystems": 1024 00:16:38.156 } 00:16:38.156 }, 00:16:38.156 { 00:16:38.156 "method": "nvmf_set_crdt", 00:16:38.156 "params": { 00:16:38.156 "crdt1": 0, 00:16:38.156 "crdt2": 0, 00:16:38.156 "crdt3": 0 00:16:38.156 } 00:16:38.156 }, 00:16:38.156 { 00:16:38.156 "method": "nvmf_create_transport", 00:16:38.156 "params": { 00:16:38.156 "trtype": "TCP", 00:16:38.156 "max_queue_depth": 128, 00:16:38.156 "max_io_qpairs_per_ctrlr": 127, 00:16:38.156 "in_capsule_data_size": 4096, 00:16:38.156 "max_io_size": 131072, 00:16:38.156 "io_unit_size": 131072, 00:16:38.156 "max_aq_depth": 128, 00:16:38.156 "num_shared_buffers": 511, 00:16:38.156 "buf_cache_size": 4294967295, 00:16:38.156 "dif_insert_or_strip": false, 00:16:38.156 "zcopy": false, 
00:16:38.156 "c2h_success": false, 00:16:38.156 "sock_priority": 0, 00:16:38.156 "abort_timeout_sec": 1, 00:16:38.156 "ack_timeout": 0, 00:16:38.156 "data_wr_pool_size": 0 00:16:38.156 } 00:16:38.156 }, 00:16:38.156 { 00:16:38.156 "method": "nvmf_create_subsystem", 00:16:38.156 "params": { 00:16:38.156 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:38.156 "allow_any_host": false, 00:16:38.156 "serial_number": "SPDK00000000000001", 00:16:38.156 "model_number": "SPDK bdev Controller", 00:16:38.156 "max_namespaces": 10, 00:16:38.156 "min_cntlid": 1, 00:16:38.156 "max_cntlid": 65519, 00:16:38.156 "ana_reporting": false 00:16:38.156 } 00:16:38.156 }, 00:16:38.156 { 00:16:38.156 "method": "nvmf_subsystem_add_host", 00:16:38.156 "params": { 00:16:38.157 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:38.157 "host": "nqn.2016-06.io.spdk:host1", 00:16:38.157 "psk": "key0" 00:16:38.157 } 00:16:38.157 }, 00:16:38.157 { 00:16:38.157 "method": "nvmf_subsystem_add_ns", 00:16:38.157 "params": { 00:16:38.157 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:38.157 "namespace": { 00:16:38.157 "nsid": 1, 00:16:38.157 "bdev_name": "malloc0", 00:16:38.157 "nguid": "3CFA198591FB420F81C8D10D25EC33EF", 00:16:38.157 "uuid": "3cfa1985-91fb-420f-81c8-d10d25ec33ef", 00:16:38.157 "no_auto_visible": false 00:16:38.157 } 00:16:38.157 } 00:16:38.157 }, 00:16:38.157 { 00:16:38.157 "method": "nvmf_subsystem_add_listener", 00:16:38.157 "params": { 00:16:38.157 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:38.157 "listen_address": { 00:16:38.157 "trtype": "TCP", 00:16:38.157 "adrfam": "IPv4", 00:16:38.157 "traddr": "10.0.0.3", 00:16:38.157 "trsvcid": "4420" 00:16:38.157 }, 00:16:38.157 "secure_channel": true 00:16:38.157 } 00:16:38.157 } 00:16:38.157 ] 00:16:38.157 } 00:16:38.157 ] 00:16:38.157 }' 00:16:38.157 20:44:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:16:38.415 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:16:38.415 "subsystems": [ 00:16:38.415 { 00:16:38.415 "subsystem": "keyring", 00:16:38.415 "config": [ 00:16:38.415 { 00:16:38.415 "method": "keyring_file_add_key", 00:16:38.415 "params": { 00:16:38.415 "name": "key0", 00:16:38.415 "path": "/tmp/tmp.MAF01qfXiS" 00:16:38.415 } 00:16:38.415 } 00:16:38.415 ] 00:16:38.415 }, 00:16:38.415 { 00:16:38.415 "subsystem": "iobuf", 00:16:38.415 "config": [ 00:16:38.415 { 00:16:38.415 "method": "iobuf_set_options", 00:16:38.415 "params": { 00:16:38.415 "small_pool_count": 8192, 00:16:38.415 "large_pool_count": 1024, 00:16:38.415 "small_bufsize": 8192, 00:16:38.415 "large_bufsize": 135168, 00:16:38.415 "enable_numa": false 00:16:38.415 } 00:16:38.415 } 00:16:38.415 ] 00:16:38.415 }, 00:16:38.415 { 00:16:38.415 "subsystem": "sock", 00:16:38.415 "config": [ 00:16:38.415 { 00:16:38.415 "method": "sock_set_default_impl", 00:16:38.415 "params": { 00:16:38.415 "impl_name": "uring" 00:16:38.415 } 00:16:38.415 }, 00:16:38.415 { 00:16:38.415 "method": "sock_impl_set_options", 00:16:38.415 "params": { 00:16:38.415 "impl_name": "ssl", 00:16:38.415 "recv_buf_size": 4096, 00:16:38.415 "send_buf_size": 4096, 00:16:38.415 "enable_recv_pipe": true, 00:16:38.415 "enable_quickack": false, 00:16:38.415 "enable_placement_id": 0, 00:16:38.415 "enable_zerocopy_send_server": true, 00:16:38.415 "enable_zerocopy_send_client": false, 00:16:38.415 "zerocopy_threshold": 0, 00:16:38.415 "tls_version": 0, 00:16:38.415 "enable_ktls": false 00:16:38.415 } 00:16:38.415 }, 
00:16:38.415 { 00:16:38.415 "method": "sock_impl_set_options", 00:16:38.415 "params": { 00:16:38.415 "impl_name": "posix", 00:16:38.415 "recv_buf_size": 2097152, 00:16:38.415 "send_buf_size": 2097152, 00:16:38.415 "enable_recv_pipe": true, 00:16:38.415 "enable_quickack": false, 00:16:38.415 "enable_placement_id": 0, 00:16:38.415 "enable_zerocopy_send_server": true, 00:16:38.415 "enable_zerocopy_send_client": false, 00:16:38.415 "zerocopy_threshold": 0, 00:16:38.415 "tls_version": 0, 00:16:38.415 "enable_ktls": false 00:16:38.415 } 00:16:38.415 }, 00:16:38.415 { 00:16:38.415 "method": "sock_impl_set_options", 00:16:38.415 "params": { 00:16:38.415 "impl_name": "uring", 00:16:38.415 "recv_buf_size": 2097152, 00:16:38.415 "send_buf_size": 2097152, 00:16:38.415 "enable_recv_pipe": true, 00:16:38.415 "enable_quickack": false, 00:16:38.415 "enable_placement_id": 0, 00:16:38.415 "enable_zerocopy_send_server": false, 00:16:38.415 "enable_zerocopy_send_client": false, 00:16:38.415 "zerocopy_threshold": 0, 00:16:38.415 "tls_version": 0, 00:16:38.415 "enable_ktls": false 00:16:38.415 } 00:16:38.415 } 00:16:38.415 ] 00:16:38.415 }, 00:16:38.415 { 00:16:38.415 "subsystem": "vmd", 00:16:38.415 "config": [] 00:16:38.415 }, 00:16:38.415 { 00:16:38.415 "subsystem": "accel", 00:16:38.415 "config": [ 00:16:38.415 { 00:16:38.415 "method": "accel_set_options", 00:16:38.415 "params": { 00:16:38.415 "small_cache_size": 128, 00:16:38.415 "large_cache_size": 16, 00:16:38.415 "task_count": 2048, 00:16:38.415 "sequence_count": 2048, 00:16:38.415 "buf_count": 2048 00:16:38.415 } 00:16:38.415 } 00:16:38.415 ] 00:16:38.415 }, 00:16:38.415 { 00:16:38.415 "subsystem": "bdev", 00:16:38.415 "config": [ 00:16:38.415 { 00:16:38.415 "method": "bdev_set_options", 00:16:38.415 "params": { 00:16:38.415 "bdev_io_pool_size": 65535, 00:16:38.415 "bdev_io_cache_size": 256, 00:16:38.415 "bdev_auto_examine": true, 00:16:38.415 "iobuf_small_cache_size": 128, 00:16:38.415 "iobuf_large_cache_size": 16 00:16:38.415 } 00:16:38.415 }, 00:16:38.415 { 00:16:38.415 "method": "bdev_raid_set_options", 00:16:38.415 "params": { 00:16:38.415 "process_window_size_kb": 1024, 00:16:38.415 "process_max_bandwidth_mb_sec": 0 00:16:38.415 } 00:16:38.415 }, 00:16:38.415 { 00:16:38.415 "method": "bdev_iscsi_set_options", 00:16:38.415 "params": { 00:16:38.415 "timeout_sec": 30 00:16:38.415 } 00:16:38.415 }, 00:16:38.415 { 00:16:38.415 "method": "bdev_nvme_set_options", 00:16:38.415 "params": { 00:16:38.415 "action_on_timeout": "none", 00:16:38.415 "timeout_us": 0, 00:16:38.415 "timeout_admin_us": 0, 00:16:38.415 "keep_alive_timeout_ms": 10000, 00:16:38.415 "arbitration_burst": 0, 00:16:38.415 "low_priority_weight": 0, 00:16:38.415 "medium_priority_weight": 0, 00:16:38.415 "high_priority_weight": 0, 00:16:38.415 "nvme_adminq_poll_period_us": 10000, 00:16:38.415 "nvme_ioq_poll_period_us": 0, 00:16:38.415 "io_queue_requests": 512, 00:16:38.415 "delay_cmd_submit": true, 00:16:38.415 "transport_retry_count": 4, 00:16:38.415 "bdev_retry_count": 3, 00:16:38.415 "transport_ack_timeout": 0, 00:16:38.415 "ctrlr_loss_timeout_sec": 0, 00:16:38.415 "reconnect_delay_sec": 0, 00:16:38.415 "fast_io_fail_timeout_sec": 0, 00:16:38.415 "disable_auto_failback": false, 00:16:38.415 "generate_uuids": false, 00:16:38.415 "transport_tos": 0, 00:16:38.415 "nvme_error_stat": false, 00:16:38.415 "rdma_srq_size": 0, 00:16:38.415 "io_path_stat": false, 00:16:38.415 "allow_accel_sequence": false, 00:16:38.416 "rdma_max_cq_size": 0, 00:16:38.416 "rdma_cm_event_timeout_ms": 0, 00:16:38.416 
"dhchap_digests": [ 00:16:38.416 "sha256", 00:16:38.416 "sha384", 00:16:38.416 "sha512" 00:16:38.416 ], 00:16:38.416 "dhchap_dhgroups": [ 00:16:38.416 "null", 00:16:38.416 "ffdhe2048", 00:16:38.416 "ffdhe3072", 00:16:38.416 "ffdhe4096", 00:16:38.416 "ffdhe6144", 00:16:38.416 "ffdhe8192" 00:16:38.416 ] 00:16:38.416 } 00:16:38.416 }, 00:16:38.416 { 00:16:38.416 "method": "bdev_nvme_attach_controller", 00:16:38.416 "params": { 00:16:38.416 "name": "TLSTEST", 00:16:38.416 "trtype": "TCP", 00:16:38.416 "adrfam": "IPv4", 00:16:38.416 "traddr": "10.0.0.3", 00:16:38.416 "trsvcid": "4420", 00:16:38.416 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:38.416 "prchk_reftag": false, 00:16:38.416 "prchk_guard": false, 00:16:38.416 "ctrlr_loss_timeout_sec": 0, 00:16:38.416 "reconnect_delay_sec": 0, 00:16:38.416 "fast_io_fail_timeout_sec": 0, 00:16:38.416 "psk": "key0", 00:16:38.416 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:38.416 "hdgst": false, 00:16:38.416 "ddgst": false, 00:16:38.416 "multipath": "multipath" 00:16:38.416 } 00:16:38.416 }, 00:16:38.416 { 00:16:38.416 "method": "bdev_nvme_set_hotplug", 00:16:38.416 "params": { 00:16:38.416 "period_us": 100000, 00:16:38.416 "enable": false 00:16:38.416 } 00:16:38.416 }, 00:16:38.416 { 00:16:38.416 "method": "bdev_wait_for_examine" 00:16:38.416 } 00:16:38.416 ] 00:16:38.416 }, 00:16:38.416 { 00:16:38.416 "subsystem": "nbd", 00:16:38.416 "config": [] 00:16:38.416 } 00:16:38.416 ] 00:16:38.416 }' 00:16:38.416 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 72537 00:16:38.416 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72537 ']' 00:16:38.416 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72537 00:16:38.416 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:38.416 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:38.416 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72537 00:16:38.416 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:16:38.416 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:16:38.416 killing process with pid 72537 00:16:38.416 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72537' 00:16:38.416 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72537 00:16:38.416 Received shutdown signal, test time was about 10.000000 seconds 00:16:38.416 00:16:38.416 Latency(us) 00:16:38.416 [2024-11-26T20:44:33.409Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:38.416 [2024-11-26T20:44:33.409Z] =================================================================================================================== 00:16:38.416 [2024-11-26T20:44:33.409Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:38.416 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72537 00:16:38.416 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 72481 00:16:38.416 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72481 ']' 00:16:38.416 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # 
kill -0 72481 00:16:38.416 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:38.416 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:38.416 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72481 00:16:38.675 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:38.675 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:16:38.675 killing process with pid 72481 00:16:38.675 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72481' 00:16:38.675 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72481 00:16:38.675 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72481 00:16:38.934 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:16:38.934 "subsystems": [ 00:16:38.934 { 00:16:38.934 "subsystem": "keyring", 00:16:38.934 "config": [ 00:16:38.934 { 00:16:38.934 "method": "keyring_file_add_key", 00:16:38.934 "params": { 00:16:38.934 "name": "key0", 00:16:38.934 "path": "/tmp/tmp.MAF01qfXiS" 00:16:38.934 } 00:16:38.934 } 00:16:38.934 ] 00:16:38.934 }, 00:16:38.934 { 00:16:38.934 "subsystem": "iobuf", 00:16:38.934 "config": [ 00:16:38.934 { 00:16:38.934 "method": "iobuf_set_options", 00:16:38.934 "params": { 00:16:38.934 "small_pool_count": 8192, 00:16:38.934 "large_pool_count": 1024, 00:16:38.934 "small_bufsize": 8192, 00:16:38.934 "large_bufsize": 135168, 00:16:38.934 "enable_numa": false 00:16:38.934 } 00:16:38.934 } 00:16:38.934 ] 00:16:38.934 }, 00:16:38.934 { 00:16:38.934 "subsystem": "sock", 00:16:38.934 "config": [ 00:16:38.934 { 00:16:38.934 "method": "sock_set_default_impl", 00:16:38.934 "params": { 00:16:38.934 "impl_name": "uring" 00:16:38.934 } 00:16:38.934 }, 00:16:38.934 { 00:16:38.934 "method": "sock_impl_set_options", 00:16:38.934 "params": { 00:16:38.934 "impl_name": "ssl", 00:16:38.934 "recv_buf_size": 4096, 00:16:38.934 "send_buf_size": 4096, 00:16:38.934 "enable_recv_pipe": true, 00:16:38.934 "enable_quickack": false, 00:16:38.934 "enable_placement_id": 0, 00:16:38.934 "enable_zerocopy_send_server": true, 00:16:38.934 "enable_zerocopy_send_client": false, 00:16:38.934 "zerocopy_threshold": 0, 00:16:38.934 "tls_version": 0, 00:16:38.934 "enable_ktls": false 00:16:38.934 } 00:16:38.934 }, 00:16:38.934 { 00:16:38.934 "method": "sock_impl_set_options", 00:16:38.934 "params": { 00:16:38.934 "impl_name": "posix", 00:16:38.934 "recv_buf_size": 2097152, 00:16:38.934 "send_buf_size": 2097152, 00:16:38.934 "enable_recv_pipe": true, 00:16:38.934 "enable_quickack": false, 00:16:38.934 "enable_placement_id": 0, 00:16:38.934 "enable_zerocopy_send_server": true, 00:16:38.934 "enable_zerocopy_send_client": false, 00:16:38.934 "zerocopy_threshold": 0, 00:16:38.934 "tls_version": 0, 00:16:38.934 "enable_ktls": false 00:16:38.934 } 00:16:38.934 }, 00:16:38.934 { 00:16:38.934 "method": "sock_impl_set_options", 00:16:38.934 "params": { 00:16:38.934 "impl_name": "uring", 00:16:38.934 "recv_buf_size": 2097152, 00:16:38.934 "send_buf_size": 2097152, 00:16:38.934 "enable_recv_pipe": true, 00:16:38.934 "enable_quickack": false, 00:16:38.934 "enable_placement_id": 0, 00:16:38.934 "enable_zerocopy_send_server": false, 00:16:38.934 "enable_zerocopy_send_client": 
false, 00:16:38.934 "zerocopy_threshold": 0, 00:16:38.934 "tls_version": 0, 00:16:38.934 "enable_ktls": false 00:16:38.934 } 00:16:38.934 } 00:16:38.934 ] 00:16:38.934 }, 00:16:38.934 { 00:16:38.934 "subsystem": "vmd", 00:16:38.934 "config": [] 00:16:38.934 }, 00:16:38.934 { 00:16:38.934 "subsystem": "accel", 00:16:38.934 "config": [ 00:16:38.934 { 00:16:38.934 "method": "accel_set_options", 00:16:38.934 "params": { 00:16:38.935 "small_cache_size": 128, 00:16:38.935 "large_cache_size": 16, 00:16:38.935 "task_count": 2048, 00:16:38.935 "sequence_count": 2048, 00:16:38.935 "buf_count": 2048 00:16:38.935 } 00:16:38.935 } 00:16:38.935 ] 00:16:38.935 }, 00:16:38.935 { 00:16:38.935 "subsystem": "bdev", 00:16:38.935 "config": [ 00:16:38.935 { 00:16:38.935 "method": "bdev_set_options", 00:16:38.935 "params": { 00:16:38.935 "bdev_io_pool_size": 65535, 00:16:38.935 "bdev_io_cache_size": 256, 00:16:38.935 "bdev_auto_examine": true, 00:16:38.935 "iobuf_small_cache_size": 128, 00:16:38.935 "iobuf_large_cache_size": 16 00:16:38.935 } 00:16:38.935 }, 00:16:38.935 { 00:16:38.935 "method": "bdev_raid_set_options", 00:16:38.935 "params": { 00:16:38.935 "process_window_size_kb": 1024, 00:16:38.935 "process_max_bandwidth_mb_sec": 0 00:16:38.935 } 00:16:38.935 }, 00:16:38.935 { 00:16:38.935 "method": "bdev_iscsi_set_options", 00:16:38.935 "params": { 00:16:38.935 "timeout_sec": 30 00:16:38.935 } 00:16:38.935 }, 00:16:38.935 { 00:16:38.935 "method": "bdev_nvme_set_options", 00:16:38.935 "params": { 00:16:38.935 "action_on_timeout": "none", 00:16:38.935 "timeout_us": 0, 00:16:38.935 "timeout_admin_us": 0, 00:16:38.935 "keep_alive_timeout_ms": 10000, 00:16:38.935 "arbitration_burst": 0, 00:16:38.935 "low_priority_weight": 0, 00:16:38.935 "medium_priority_weight": 0, 00:16:38.935 "high_priority_weight": 0, 00:16:38.935 "nvme_adminq_poll_period_us": 10000, 00:16:38.935 "nvme_ioq_poll_period_us": 0, 00:16:38.935 "io_queue_requests": 0, 00:16:38.935 "delay_cmd_submit": true, 00:16:38.935 "transport_retry_count": 4, 00:16:38.935 "bdev_retry_count": 3, 00:16:38.935 "transport_ack_timeout": 0, 00:16:38.935 "ctrlr_loss_timeout_sec": 0, 00:16:38.935 "reconnect_delay_sec": 0, 00:16:38.935 "fast_io_fail_timeout_sec": 0, 00:16:38.935 "disable_auto_failback": false, 00:16:38.935 "generate_uuids": false, 00:16:38.935 "transport_tos": 0, 00:16:38.935 "nvme_error_stat": false, 00:16:38.935 "rdma_srq_size": 0, 00:16:38.935 "io_path_stat": false, 00:16:38.935 "allow_accel_sequence": false, 00:16:38.935 "rdma_max_cq_size": 0, 00:16:38.935 "rdma_cm_event_timeout_ms": 0, 00:16:38.935 "dhchap_digests": [ 00:16:38.935 "sha256", 00:16:38.935 "sha384", 00:16:38.935 "sha512" 00:16:38.935 ], 00:16:38.935 "dhchap_dhgroups": [ 00:16:38.935 "null", 00:16:38.935 "ffdhe2048", 00:16:38.935 "ffdhe3072", 00:16:38.935 "ffdhe4096", 00:16:38.935 "ffdhe6144", 00:16:38.935 "ffdhe8192" 00:16:38.935 ] 00:16:38.935 } 00:16:38.935 }, 00:16:38.935 { 00:16:38.935 "method": "bdev_nvme_set_hotplug", 00:16:38.935 "params": { 00:16:38.935 "period_us": 100000, 00:16:38.935 "enable": false 00:16:38.935 } 00:16:38.935 }, 00:16:38.935 { 00:16:38.935 "method": "bdev_malloc_create", 00:16:38.935 "params": { 00:16:38.935 "name": "malloc0", 00:16:38.935 "num_blocks": 8192, 00:16:38.935 "block_size": 4096, 00:16:38.935 "physical_block_size": 4096, 00:16:38.935 "uuid": "3cfa1985-91fb-420f-81c8-d10d25ec33ef", 00:16:38.935 "optimal_io_boundary": 0, 00:16:38.935 "md_size": 0, 00:16:38.935 "dif_type": 0, 00:16:38.935 "dif_is_head_of_md": false, 00:16:38.935 "dif_pi_format": 0 
00:16:38.935 } 00:16:38.935 }, 00:16:38.935 { 00:16:38.935 "method": "bdev_wait_for_examine" 00:16:38.935 } 00:16:38.935 ] 00:16:38.935 }, 00:16:38.935 { 00:16:38.935 "subsystem": "nbd", 00:16:38.935 "config": [] 00:16:38.935 }, 00:16:38.935 { 00:16:38.935 "subsystem": "scheduler", 00:16:38.935 "config": [ 00:16:38.935 { 00:16:38.935 "method": "framework_set_scheduler", 00:16:38.935 "params": { 00:16:38.935 "name": "static" 00:16:38.935 } 00:16:38.935 } 00:16:38.935 ] 00:16:38.935 }, 00:16:38.935 { 00:16:38.935 "subsystem": "nvmf", 00:16:38.935 "config": [ 00:16:38.935 { 00:16:38.935 "method": "nvmf_set_config", 00:16:38.935 "params": { 00:16:38.935 "discovery_filter": "match_any", 00:16:38.935 "admin_cmd_passthru": { 00:16:38.935 "identify_ctrlr": false 00:16:38.935 }, 00:16:38.935 "dhchap_digests": [ 00:16:38.935 "sha256", 00:16:38.935 "sha384", 00:16:38.935 "sha512" 00:16:38.935 ], 00:16:38.935 "dhchap_dhgroups": [ 00:16:38.935 "null", 00:16:38.935 "ffdhe2048", 00:16:38.935 "ffdhe3072", 00:16:38.935 "ffdhe4096", 00:16:38.935 "ffdhe6144", 00:16:38.935 "ffdhe8192" 00:16:38.935 ] 00:16:38.935 } 00:16:38.935 }, 00:16:38.935 { 00:16:38.935 "method": "nvmf_set_max_subsystems", 00:16:38.935 "params": { 00:16:38.935 "max_subsystems": 1024 00:16:38.935 } 00:16:38.935 }, 00:16:38.935 { 00:16:38.935 "method": "nvmf_set_crdt", 00:16:38.935 "params": { 00:16:38.935 "crdt1": 0, 00:16:38.935 "crdt2": 0, 00:16:38.935 "crdt3": 0 00:16:38.935 } 00:16:38.935 }, 00:16:38.935 { 00:16:38.935 "method": "nvmf_create_transport", 00:16:38.935 "params": { 00:16:38.935 "trtype": "TCP", 00:16:38.935 "max_queue_depth": 128, 00:16:38.935 "max_io_qpairs_per_ctrlr": 127, 00:16:38.935 "in_capsule_data_size": 4096, 00:16:38.935 "max_io_size": 131072, 00:16:38.935 "io_unit_size": 131072, 00:16:38.935 "max_aq_depth": 128, 00:16:38.935 "num_shared_buffers": 511, 00:16:38.935 "buf_cache_size": 4294967295, 00:16:38.935 "dif_insert_or_strip": false, 00:16:38.935 "zcopy": false, 00:16:38.935 "c2h_success": false, 00:16:38.935 "sock_priority": 0, 00:16:38.935 "abort_timeout_sec": 1, 00:16:38.935 "ack_timeout": 0, 00:16:38.935 "data_wr_pool_size": 0 00:16:38.935 } 00:16:38.935 }, 00:16:38.935 { 00:16:38.935 "method": "nvmf_create_subsystem", 00:16:38.935 "params": { 00:16:38.935 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:38.935 "allow_any_host": false, 00:16:38.935 "serial_number": "SPDK00000000000001", 00:16:38.935 "model_number": "SPDK bdev Controller", 00:16:38.935 "max_namespaces": 10, 00:16:38.935 "min_cntlid": 1, 00:16:38.935 "max_cntlid": 65519, 00:16:38.935 "ana_reporting": false 00:16:38.935 } 00:16:38.935 }, 00:16:38.935 { 00:16:38.935 "method": "nvmf_subsystem_add_host", 00:16:38.935 "params": { 00:16:38.935 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:38.935 "host": "nqn.2016-06.io.spdk:host1", 00:16:38.935 "psk": "key0" 00:16:38.935 } 00:16:38.935 }, 00:16:38.935 { 00:16:38.935 "method": "nvmf_subsystem_add_ns", 00:16:38.935 "params": { 00:16:38.935 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:38.935 "namespace": { 00:16:38.935 "nsid": 1, 00:16:38.935 "bdev_name": "malloc0", 00:16:38.935 "nguid": "3CFA198591FB420F81C8D10D25EC33EF", 00:16:38.935 "uuid": "3cfa1985-91fb-420f-81c8-d10d25ec33ef", 00:16:38.935 "no_auto_visible": false 00:16:38.935 } 00:16:38.935 } 00:16:38.935 }, 00:16:38.935 { 00:16:38.935 "method": "nvmf_subsystem_add_listener", 00:16:38.935 "params": { 00:16:38.935 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:38.935 "listen_address": { 00:16:38.935 "trtype": "TCP", 00:16:38.935 "adrfam": "IPv4", 00:16:38.935 
"traddr": "10.0.0.3", 00:16:38.935 "trsvcid": "4420" 00:16:38.935 }, 00:16:38.935 "secure_channel": true 00:16:38.935 } 00:16:38.935 } 00:16:38.935 ] 00:16:38.935 } 00:16:38.935 ] 00:16:38.935 }' 00:16:38.935 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:16:38.935 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:38.935 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:38.935 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:38.935 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=72586 00:16:38.935 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 72586 00:16:38.936 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72586 ']' 00:16:38.936 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:38.936 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:38.936 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:38.936 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:38.936 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:38.936 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:38.936 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:16:38.936 [2024-11-26 20:44:33.766625] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:16:38.936 [2024-11-26 20:44:33.766735] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:38.936 [2024-11-26 20:44:33.918125] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:39.195 [2024-11-26 20:44:33.980513] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:39.195 [2024-11-26 20:44:33.980574] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:39.195 [2024-11-26 20:44:33.980585] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:39.195 [2024-11-26 20:44:33.980594] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:39.195 [2024-11-26 20:44:33.980602] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:39.195 [2024-11-26 20:44:33.980950] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:39.195 [2024-11-26 20:44:34.172003] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:39.454 [2024-11-26 20:44:34.267766] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:39.454 [2024-11-26 20:44:34.299722] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:39.454 [2024-11-26 20:44:34.299956] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:40.021 20:44:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:40.021 20:44:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:40.021 20:44:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:40.021 20:44:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:40.021 20:44:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:40.021 20:44:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:40.021 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:40.021 20:44:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=72618 00:16:40.021 20:44:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 72618 /var/tmp/bdevperf.sock 00:16:40.021 20:44:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72618 ']' 00:16:40.021 20:44:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:40.021 20:44:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:40.021 20:44:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:16:40.022 20:44:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:40.022 20:44:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:40.022 20:44:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:16:40.022 "subsystems": [ 00:16:40.022 { 00:16:40.022 "subsystem": "keyring", 00:16:40.022 "config": [ 00:16:40.022 { 00:16:40.022 "method": "keyring_file_add_key", 00:16:40.022 "params": { 00:16:40.022 "name": "key0", 00:16:40.022 "path": "/tmp/tmp.MAF01qfXiS" 00:16:40.022 } 00:16:40.022 } 00:16:40.022 ] 00:16:40.022 }, 00:16:40.022 { 00:16:40.022 "subsystem": "iobuf", 00:16:40.022 "config": [ 00:16:40.022 { 00:16:40.022 "method": "iobuf_set_options", 00:16:40.022 "params": { 00:16:40.022 "small_pool_count": 8192, 00:16:40.022 "large_pool_count": 1024, 00:16:40.022 "small_bufsize": 8192, 00:16:40.022 "large_bufsize": 135168, 00:16:40.022 "enable_numa": false 00:16:40.022 } 00:16:40.022 } 00:16:40.022 ] 00:16:40.022 }, 00:16:40.022 { 00:16:40.022 "subsystem": "sock", 00:16:40.022 "config": [ 00:16:40.022 { 00:16:40.022 "method": "sock_set_default_impl", 00:16:40.022 "params": { 00:16:40.022 "impl_name": "uring" 00:16:40.022 } 00:16:40.022 }, 00:16:40.022 { 00:16:40.022 "method": "sock_impl_set_options", 00:16:40.022 "params": { 00:16:40.022 "impl_name": "ssl", 00:16:40.022 "recv_buf_size": 4096, 00:16:40.022 "send_buf_size": 4096, 00:16:40.022 "enable_recv_pipe": true, 00:16:40.022 "enable_quickack": false, 00:16:40.022 "enable_placement_id": 0, 00:16:40.022 "enable_zerocopy_send_server": true, 00:16:40.022 "enable_zerocopy_send_client": false, 00:16:40.022 "zerocopy_threshold": 0, 00:16:40.022 "tls_version": 0, 00:16:40.022 "enable_ktls": false 00:16:40.022 } 00:16:40.022 }, 00:16:40.022 { 00:16:40.022 "method": "sock_impl_set_options", 00:16:40.022 "params": { 00:16:40.022 "impl_name": "posix", 00:16:40.022 "recv_buf_size": 2097152, 00:16:40.022 "send_buf_size": 2097152, 00:16:40.022 "enable_recv_pipe": true, 00:16:40.022 "enable_quickack": false, 00:16:40.022 "enable_placement_id": 0, 00:16:40.022 "enable_zerocopy_send_server": true, 00:16:40.022 "enable_zerocopy_send_client": false, 00:16:40.022 "zerocopy_threshold": 0, 00:16:40.022 "tls_version": 0, 00:16:40.022 "enable_ktls": false 00:16:40.022 } 00:16:40.022 }, 00:16:40.022 { 00:16:40.022 "method": "sock_impl_set_options", 00:16:40.022 "params": { 00:16:40.022 "impl_name": "uring", 00:16:40.022 "recv_buf_size": 2097152, 00:16:40.022 "send_buf_size": 2097152, 00:16:40.022 "enable_recv_pipe": true, 00:16:40.022 "enable_quickack": false, 00:16:40.022 "enable_placement_id": 0, 00:16:40.022 "enable_zerocopy_send_server": false, 00:16:40.022 "enable_zerocopy_send_client": false, 00:16:40.022 "zerocopy_threshold": 0, 00:16:40.022 "tls_version": 0, 00:16:40.022 "enable_ktls": false 00:16:40.022 } 00:16:40.022 } 00:16:40.022 ] 00:16:40.022 }, 00:16:40.022 { 00:16:40.022 "subsystem": "vmd", 00:16:40.022 "config": [] 00:16:40.022 }, 00:16:40.022 { 00:16:40.022 "subsystem": "accel", 00:16:40.022 "config": [ 00:16:40.022 { 00:16:40.022 "method": "accel_set_options", 00:16:40.022 "params": { 00:16:40.022 "small_cache_size": 128, 00:16:40.022 "large_cache_size": 16, 00:16:40.022 "task_count": 2048, 00:16:40.022 "sequence_count": 2048, 00:16:40.022 "buf_count": 2048 00:16:40.022 } 00:16:40.022 } 00:16:40.022 ] 00:16:40.022 }, 00:16:40.022 { 00:16:40.022 "subsystem": "bdev", 00:16:40.022 "config": [ 00:16:40.022 { 00:16:40.022 "method": 
"bdev_set_options", 00:16:40.022 "params": { 00:16:40.022 "bdev_io_pool_size": 65535, 00:16:40.022 "bdev_io_cache_size": 256, 00:16:40.022 "bdev_auto_examine": true, 00:16:40.022 "iobuf_small_cache_size": 128, 00:16:40.022 "iobuf_large_cache_size": 16 00:16:40.022 } 00:16:40.022 }, 00:16:40.022 { 00:16:40.022 "method": "bdev_raid_set_options", 00:16:40.022 "params": { 00:16:40.022 "process_window_size_kb": 1024, 00:16:40.022 "process_max_bandwidth_mb_sec": 0 00:16:40.022 } 00:16:40.022 }, 00:16:40.022 { 00:16:40.022 "method": "bdev_iscsi_set_options", 00:16:40.022 "params": { 00:16:40.022 "timeout_sec": 30 00:16:40.022 } 00:16:40.022 }, 00:16:40.022 { 00:16:40.022 "method": "bdev_nvme_set_options", 00:16:40.022 "params": { 00:16:40.022 "action_on_timeout": "none", 00:16:40.022 "timeout_us": 0, 00:16:40.022 "timeout_admin_us": 0, 00:16:40.022 "keep_alive_timeout_ms": 10000, 00:16:40.022 "arbitration_burst": 0, 00:16:40.022 "low_priority_weight": 0, 00:16:40.022 "medium_priority_weight": 0, 00:16:40.022 "high_priority_weight": 0, 00:16:40.022 "nvme_adminq_poll_period_us": 10000, 00:16:40.022 "nvme_ioq_poll_period_us": 0, 00:16:40.022 "io_queue_requests": 512, 00:16:40.022 "delay_cmd_submit": true, 00:16:40.022 "transport_retry_count": 4, 00:16:40.022 "bdev_retry_count": 3, 00:16:40.022 "transport_ack_timeout": 0, 00:16:40.022 "ctrlr_loss_timeout_sec": 0, 00:16:40.022 "reconnect_delay_sec": 0, 00:16:40.022 "fast_io_fail_timeout_sec": 0, 00:16:40.022 "disable_auto_failback": false, 00:16:40.022 "generate_uuids": false, 00:16:40.022 "transport_tos": 0, 00:16:40.022 "nvme_error_stat": false, 00:16:40.022 "rdma_srq_size": 0, 00:16:40.022 "io_path_stat": false, 00:16:40.022 "allow_accel_sequence": false, 00:16:40.022 "rdma_max_cq_size": 0, 00:16:40.022 "rdma_cm_event_timeout_ms": 0, 00:16:40.022 "dhchap_digests": [ 00:16:40.022 "sha256", 00:16:40.022 "sha384", 00:16:40.022 "sha512" 00:16:40.022 ], 00:16:40.022 "dhchap_dhgroups": [ 00:16:40.022 "null", 00:16:40.022 "ffdhe2048", 00:16:40.022 "ffdhe3072", 00:16:40.022 "ffdhe4096", 00:16:40.022 "ffdhe6144", 00:16:40.022 "ffdhe8192" 00:16:40.022 ] 00:16:40.022 } 00:16:40.022 }, 00:16:40.022 { 00:16:40.022 "method": "bdev_nvme_attach_controller", 00:16:40.022 "params": { 00:16:40.022 "name": "TLSTEST", 00:16:40.022 "trtype": "TCP", 00:16:40.022 "adrfam": "IPv4", 00:16:40.022 "traddr": "10.0.0.3", 00:16:40.022 "trsvcid": "4420", 00:16:40.022 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:40.022 "prchk_reftag": false, 00:16:40.022 "prchk_guard": false, 00:16:40.022 "ctrlr_loss_timeout_sec": 0, 00:16:40.022 "reconnect_delay_sec": 0, 00:16:40.022 "fast_io_fail_timeout_sec": 0, 00:16:40.022 "psk": "key0", 00:16:40.022 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:40.022 "hdgst": false, 00:16:40.022 "ddgst": false, 00:16:40.022 "multipath": "multipath" 00:16:40.022 } 00:16:40.022 }, 00:16:40.022 { 00:16:40.022 "method": "bdev_nvme_set_hotplug", 00:16:40.022 "params": { 00:16:40.022 "period_us": 100000, 00:16:40.022 "enable": false 00:16:40.022 } 00:16:40.022 }, 00:16:40.022 { 00:16:40.022 "method": "bdev_wait_for_examine" 00:16:40.022 } 00:16:40.022 ] 00:16:40.022 }, 00:16:40.022 { 00:16:40.022 "subsystem": "nbd", 00:16:40.022 "config": [] 00:16:40.022 } 00:16:40.022 ] 00:16:40.022 }' 00:16:40.022 20:44:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:16:40.022 [2024-11-26 20:44:34.903306] Starting SPDK 
v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:16:40.022 [2024-11-26 20:44:34.903422] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72618 ] 00:16:40.281 [2024-11-26 20:44:35.063810] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:40.281 [2024-11-26 20:44:35.127076] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:40.281 [2024-11-26 20:44:35.258354] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:40.539 [2024-11-26 20:44:35.316440] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:41.107 20:44:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:41.107 20:44:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:41.107 20:44:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:16:41.107 Running I/O for 10 seconds... 00:16:43.430 5606.00 IOPS, 21.90 MiB/s [2024-11-26T20:44:39.367Z] 5638.50 IOPS, 22.03 MiB/s [2024-11-26T20:44:40.303Z] 5632.67 IOPS, 22.00 MiB/s [2024-11-26T20:44:41.236Z] 5614.00 IOPS, 21.93 MiB/s [2024-11-26T20:44:42.169Z] 5613.00 IOPS, 21.93 MiB/s [2024-11-26T20:44:43.103Z] 5604.17 IOPS, 21.89 MiB/s [2024-11-26T20:44:44.070Z] 5569.57 IOPS, 21.76 MiB/s [2024-11-26T20:44:45.445Z] 5558.50 IOPS, 21.71 MiB/s [2024-11-26T20:44:46.380Z] 5559.44 IOPS, 21.72 MiB/s [2024-11-26T20:44:46.380Z] 5521.50 IOPS, 21.57 MiB/s 00:16:51.387 Latency(us) 00:16:51.387 [2024-11-26T20:44:46.380Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:51.387 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:16:51.387 Verification LBA range: start 0x0 length 0x2000 00:16:51.387 TLSTESTn1 : 10.02 5523.74 21.58 0.00 0.00 23132.78 5336.50 21720.50 00:16:51.387 [2024-11-26T20:44:46.380Z] =================================================================================================================== 00:16:51.388 [2024-11-26T20:44:46.381Z] Total : 5523.74 21.58 0.00 0.00 23132.78 5336.50 21720.50 00:16:51.388 { 00:16:51.388 "results": [ 00:16:51.388 { 00:16:51.388 "job": "TLSTESTn1", 00:16:51.388 "core_mask": "0x4", 00:16:51.388 "workload": "verify", 00:16:51.388 "status": "finished", 00:16:51.388 "verify_range": { 00:16:51.388 "start": 0, 00:16:51.388 "length": 8192 00:16:51.388 }, 00:16:51.388 "queue_depth": 128, 00:16:51.388 "io_size": 4096, 00:16:51.388 "runtime": 10.01894, 00:16:51.388 "iops": 5523.738040151952, 00:16:51.388 "mibps": 21.577101719343563, 00:16:51.388 "io_failed": 0, 00:16:51.388 "io_timeout": 0, 00:16:51.388 "avg_latency_us": 23132.779572769152, 00:16:51.388 "min_latency_us": 5336.5028571428575, 00:16:51.388 "max_latency_us": 21720.502857142856 00:16:51.388 } 00:16:51.388 ], 00:16:51.388 "core_count": 1 00:16:51.388 } 00:16:51.388 20:44:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:51.388 20:44:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 72618 00:16:51.388 20:44:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72618 ']' 
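For reference, the 10-second verify run summarized above is driven by the two commands visible earlier in the trace: bdevperf is started in wait-for-RPC mode (-z) with its own JSON config on /dev/fd/63 (the target/tls.sh@206 blob, which carries the key0 keyring entry and the bdev_nvme_attach_controller call with psk key0), and bdevperf.py then triggers the workload. Condensed, with the config assumed saved to bperf_config.json and bdevperf backgrounded by hand rather than by the harness:

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z \
        -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c bperf_config.json &
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
        -t 20 -s /var/tmp/bdevperf.sock perform_tests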
00:16:51.388 20:44:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72618 00:16:51.388 20:44:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:51.388 20:44:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:51.388 20:44:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72618 00:16:51.388 20:44:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:16:51.388 20:44:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:16:51.388 killing process with pid 72618 00:16:51.388 20:44:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72618' 00:16:51.388 20:44:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72618 00:16:51.388 Received shutdown signal, test time was about 10.000000 seconds 00:16:51.388 00:16:51.388 Latency(us) 00:16:51.388 [2024-11-26T20:44:46.381Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:51.388 [2024-11-26T20:44:46.381Z] =================================================================================================================== 00:16:51.388 [2024-11-26T20:44:46.381Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:51.388 20:44:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72618 00:16:51.388 20:44:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 72586 00:16:51.388 20:44:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72586 ']' 00:16:51.388 20:44:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72586 00:16:51.388 20:44:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:51.388 20:44:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:51.388 20:44:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72586 00:16:51.388 20:44:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:51.388 20:44:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:16:51.388 killing process with pid 72586 00:16:51.388 20:44:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72586' 00:16:51.388 20:44:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72586 00:16:51.388 20:44:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72586 00:16:51.953 20:44:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:16:51.953 20:44:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:51.953 20:44:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:51.953 20:44:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:51.953 20:44:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=72757 00:16:51.953 20:44:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 72757 00:16:51.953 20:44:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@835 -- # '[' -z 72757 ']' 00:16:51.953 20:44:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:51.953 20:44:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:51.953 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:51.953 20:44:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:51.953 20:44:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:51.953 20:44:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:16:51.953 20:44:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:51.953 [2024-11-26 20:44:46.733894] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:16:51.953 [2024-11-26 20:44:46.734005] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:51.953 [2024-11-26 20:44:46.887597] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:52.211 [2024-11-26 20:44:46.953171] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:52.211 [2024-11-26 20:44:46.953229] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:52.211 [2024-11-26 20:44:46.953240] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:52.211 [2024-11-26 20:44:46.953248] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:52.211 [2024-11-26 20:44:46.953256] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
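Both targets and both bdevperf instances in this trace are synchronized the same way: the caller prints "Waiting for process to start up and listen on UNIX domain socket ..." and waitforlisten then polls (max_retries=100 above) until the RPC socket answers. A hypothetical simplified stand-in for that helper is sketched below; rpc_get_methods is a stock SPDK RPC, not something this script calls, and the real waitforlisten in autotest_common.sh also validates the pid:

    # Simplified sketch of waiting for the target's RPC socket to come up.
    for _ in $(seq 1 100); do
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods \
            &>/dev/null && break
        sleep 0.1
    done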
00:16:52.211 [2024-11-26 20:44:46.953615] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:52.211 [2024-11-26 20:44:47.032237] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:52.777 20:44:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:52.777 20:44:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:52.777 20:44:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:52.777 20:44:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:52.777 20:44:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:52.777 20:44:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:52.777 20:44:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.MAF01qfXiS 00:16:52.777 20:44:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.MAF01qfXiS 00:16:52.777 20:44:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:16:53.035 [2024-11-26 20:44:47.925906] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:53.035 20:44:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:16:53.293 20:44:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:16:53.552 [2024-11-26 20:44:48.445989] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:53.552 [2024-11-26 20:44:48.446246] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:53.552 20:44:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:16:53.811 malloc0 00:16:53.811 20:44:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:16:54.070 20:44:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.MAF01qfXiS 00:16:54.335 20:44:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:16:54.596 20:44:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=72813 00:16:54.596 20:44:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:16:54.596 20:44:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:54.596 20:44:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 72813 /var/tmp/bdevperf.sock 00:16:54.596 20:44:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72813 ']' 00:16:54.597 
20:44:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:54.597 20:44:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:54.597 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:54.597 20:44:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:54.597 20:44:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:54.597 20:44:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:54.597 [2024-11-26 20:44:49.473253] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:16:54.597 [2024-11-26 20:44:49.473371] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72813 ] 00:16:54.854 [2024-11-26 20:44:49.630216] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:54.854 [2024-11-26 20:44:49.690134] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:54.854 [2024-11-26 20:44:49.739383] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:55.796 20:44:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:55.796 20:44:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:55.796 20:44:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.MAF01qfXiS 00:16:55.796 20:44:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:16:56.054 [2024-11-26 20:44:50.903631] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:56.054 nvme0n1 00:16:56.054 20:44:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:56.312 Running I/O for 1 seconds... 
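Stripped of the xtrace noise, the TLS setup exercised in this pass reduces to the RPC sequence shown above: the target registers the PSK file as key0 and binds it to the host NQN, and the bdevperf initiator registers the same key and attaches over TCP with --psk. Condensed from the exact commands in the trace (RPC is shorthand introduced here; tls.sh calls rpc.py directly):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # target side (default socket /var/tmp/spdk.sock)
    $RPC nvmf_create_transport -t tcp -o
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k
    $RPC bdev_malloc_create 32 4096 -b malloc0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    $RPC keyring_file_add_key key0 /tmp/tmp.MAF01qfXiS
    $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0
    # initiator side (bdevperf RPC socket)
    $RPC -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.MAF01qfXiS
    $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
        -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1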
00:16:57.244 5677.00 IOPS, 22.18 MiB/s 00:16:57.244 Latency(us) 00:16:57.244 [2024-11-26T20:44:52.237Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:57.244 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:57.244 Verification LBA range: start 0x0 length 0x2000 00:16:57.244 nvme0n1 : 1.01 5735.29 22.40 0.00 0.00 22158.11 4369.07 16103.13 00:16:57.244 [2024-11-26T20:44:52.237Z] =================================================================================================================== 00:16:57.244 [2024-11-26T20:44:52.237Z] Total : 5735.29 22.40 0.00 0.00 22158.11 4369.07 16103.13 00:16:57.244 { 00:16:57.244 "results": [ 00:16:57.244 { 00:16:57.244 "job": "nvme0n1", 00:16:57.244 "core_mask": "0x2", 00:16:57.244 "workload": "verify", 00:16:57.244 "status": "finished", 00:16:57.244 "verify_range": { 00:16:57.244 "start": 0, 00:16:57.244 "length": 8192 00:16:57.244 }, 00:16:57.244 "queue_depth": 128, 00:16:57.244 "io_size": 4096, 00:16:57.244 "runtime": 1.012155, 00:16:57.244 "iops": 5735.287579471524, 00:16:57.244 "mibps": 22.40346710731064, 00:16:57.244 "io_failed": 0, 00:16:57.244 "io_timeout": 0, 00:16:57.244 "avg_latency_us": 22158.113778434024, 00:16:57.244 "min_latency_us": 4369.066666666667, 00:16:57.244 "max_latency_us": 16103.131428571429 00:16:57.244 } 00:16:57.244 ], 00:16:57.244 "core_count": 1 00:16:57.244 } 00:16:57.244 20:44:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 72813 00:16:57.244 20:44:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72813 ']' 00:16:57.244 20:44:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72813 00:16:57.244 20:44:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:57.244 20:44:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:57.244 20:44:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72813 00:16:57.244 20:44:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:57.244 killing process with pid 72813 00:16:57.244 20:44:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:16:57.244 20:44:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72813' 00:16:57.244 20:44:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72813 00:16:57.244 Received shutdown signal, test time was about 1.000000 seconds 00:16:57.244 00:16:57.244 Latency(us) 00:16:57.244 [2024-11-26T20:44:52.237Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:57.244 [2024-11-26T20:44:52.237Z] =================================================================================================================== 00:16:57.244 [2024-11-26T20:44:52.237Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:57.244 20:44:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72813 00:16:57.502 20:44:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 72757 00:16:57.502 20:44:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72757 ']' 00:16:57.502 20:44:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72757 00:16:57.502 20:44:52 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:57.502 20:44:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:57.502 20:44:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72757 00:16:57.502 20:44:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:57.502 20:44:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:57.502 killing process with pid 72757 00:16:57.502 20:44:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72757' 00:16:57.502 20:44:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72757 00:16:57.502 20:44:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72757 00:16:57.760 20:44:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:16:57.760 20:44:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:57.760 20:44:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:57.760 20:44:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:57.760 20:44:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=72865 00:16:57.760 20:44:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:16:57.760 20:44:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 72865 00:16:57.760 20:44:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72865 ']' 00:16:57.760 20:44:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:57.760 20:44:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:57.760 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:57.760 20:44:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:57.760 20:44:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:57.760 20:44:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:58.018 [2024-11-26 20:44:52.771232] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:16:58.018 [2024-11-26 20:44:52.771338] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:58.018 [2024-11-26 20:44:52.909913] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:58.018 [2024-11-26 20:44:52.974282] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:58.018 [2024-11-26 20:44:52.974337] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
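Every pass in this trace tears down its processes with the same killprocess helper (seen above for pids 72481, 72618, 72586 and 72757, and again further down): verify the pid is still alive, read its command name (the trace shows it being compared against sudo), announce the kill, then kill and wait. A rough sketch of that logic, reconstructed only from the checks visible in the xtrace; the real helper in autotest_common.sh also handles non-Linux ps variants and the sudo case differently:

    killprocess() {   # simplified reconstruction, for reading the trace only
        local pid=$1
        [ -z "$pid" ] && return 1
        kill -0 "$pid" 2>/dev/null || return 0
        local name
        name=$(ps --no-headers -o comm= "$pid")
        [ "$name" = sudo ] && return 1
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"
    }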
00:16:58.018 [2024-11-26 20:44:52.974363] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:58.018 [2024-11-26 20:44:52.974372] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:58.018 [2024-11-26 20:44:52.974381] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:58.018 [2024-11-26 20:44:52.974682] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:58.276 [2024-11-26 20:44:53.051991] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:58.276 20:44:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:58.276 20:44:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:58.276 20:44:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:58.276 20:44:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:58.276 20:44:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:58.276 20:44:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:58.276 20:44:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:16:58.276 20:44:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.276 20:44:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:58.276 [2024-11-26 20:44:53.189122] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:58.276 malloc0 00:16:58.276 [2024-11-26 20:44:53.219408] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:58.276 [2024-11-26 20:44:53.219634] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:58.276 20:44:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.276 20:44:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=72889 00:16:58.276 20:44:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:16:58.276 20:44:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 72889 /var/tmp/bdevperf.sock 00:16:58.276 20:44:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72889 ']' 00:16:58.276 20:44:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:58.276 20:44:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:58.276 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:58.276 20:44:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:16:58.276 20:44:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:58.276 20:44:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:58.534 [2024-11-26 20:44:53.294257] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:16:58.534 [2024-11-26 20:44:53.294338] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72889 ] 00:16:58.534 [2024-11-26 20:44:53.438737] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:58.534 [2024-11-26 20:44:53.504465] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:58.792 [2024-11-26 20:44:53.553967] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:58.792 20:44:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:58.792 20:44:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:58.792 20:44:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.MAF01qfXiS 00:16:59.050 20:44:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:16:59.307 [2024-11-26 20:44:54.083154] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:59.307 nvme0n1 00:16:59.307 20:44:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:59.307 Running I/O for 1 seconds... 
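Beyond the verify results, this final pass also snapshots the live configuration of both processes with save_config (target/tls.sh@267 and @268 below store the output in the tgtcfg and bperfcfg variables whose contents follow). Run by hand, the equivalent would be roughly the following, with the output redirected to files instead of shell variables (file names are illustrative):

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config > tgt_config.json
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config > bperf_config.json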
00:17:00.285 5537.00 IOPS, 21.63 MiB/s 00:17:00.285 Latency(us) 00:17:00.285 [2024-11-26T20:44:55.278Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:00.285 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:00.285 Verification LBA range: start 0x0 length 0x2000 00:17:00.285 nvme0n1 : 1.01 5594.38 21.85 0.00 0.00 22717.51 4306.65 17351.44 00:17:00.285 [2024-11-26T20:44:55.278Z] =================================================================================================================== 00:17:00.285 [2024-11-26T20:44:55.278Z] Total : 5594.38 21.85 0.00 0.00 22717.51 4306.65 17351.44 00:17:00.285 { 00:17:00.285 "results": [ 00:17:00.285 { 00:17:00.285 "job": "nvme0n1", 00:17:00.285 "core_mask": "0x2", 00:17:00.285 "workload": "verify", 00:17:00.285 "status": "finished", 00:17:00.285 "verify_range": { 00:17:00.285 "start": 0, 00:17:00.285 "length": 8192 00:17:00.285 }, 00:17:00.285 "queue_depth": 128, 00:17:00.285 "io_size": 4096, 00:17:00.285 "runtime": 1.012623, 00:17:00.285 "iops": 5594.38211456781, 00:17:00.285 "mibps": 21.85305513503051, 00:17:00.286 "io_failed": 0, 00:17:00.286 "io_timeout": 0, 00:17:00.286 "avg_latency_us": 22717.50778464254, 00:17:00.286 "min_latency_us": 4306.651428571428, 00:17:00.286 "max_latency_us": 17351.43619047619 00:17:00.286 } 00:17:00.286 ], 00:17:00.286 "core_count": 1 00:17:00.286 } 00:17:00.543 20:44:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:17:00.543 20:44:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.543 20:44:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:00.543 20:44:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.543 20:44:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:17:00.543 "subsystems": [ 00:17:00.543 { 00:17:00.543 "subsystem": "keyring", 00:17:00.543 "config": [ 00:17:00.543 { 00:17:00.543 "method": "keyring_file_add_key", 00:17:00.543 "params": { 00:17:00.543 "name": "key0", 00:17:00.543 "path": "/tmp/tmp.MAF01qfXiS" 00:17:00.543 } 00:17:00.543 } 00:17:00.543 ] 00:17:00.543 }, 00:17:00.543 { 00:17:00.543 "subsystem": "iobuf", 00:17:00.543 "config": [ 00:17:00.543 { 00:17:00.543 "method": "iobuf_set_options", 00:17:00.543 "params": { 00:17:00.543 "small_pool_count": 8192, 00:17:00.543 "large_pool_count": 1024, 00:17:00.543 "small_bufsize": 8192, 00:17:00.543 "large_bufsize": 135168, 00:17:00.543 "enable_numa": false 00:17:00.543 } 00:17:00.543 } 00:17:00.543 ] 00:17:00.543 }, 00:17:00.543 { 00:17:00.543 "subsystem": "sock", 00:17:00.543 "config": [ 00:17:00.543 { 00:17:00.543 "method": "sock_set_default_impl", 00:17:00.543 "params": { 00:17:00.543 "impl_name": "uring" 00:17:00.543 } 00:17:00.543 }, 00:17:00.543 { 00:17:00.543 "method": "sock_impl_set_options", 00:17:00.543 "params": { 00:17:00.543 "impl_name": "ssl", 00:17:00.543 "recv_buf_size": 4096, 00:17:00.543 "send_buf_size": 4096, 00:17:00.543 "enable_recv_pipe": true, 00:17:00.543 "enable_quickack": false, 00:17:00.543 "enable_placement_id": 0, 00:17:00.543 "enable_zerocopy_send_server": true, 00:17:00.543 "enable_zerocopy_send_client": false, 00:17:00.543 "zerocopy_threshold": 0, 00:17:00.543 "tls_version": 0, 00:17:00.543 "enable_ktls": false 00:17:00.543 } 00:17:00.543 }, 00:17:00.543 { 00:17:00.543 "method": "sock_impl_set_options", 00:17:00.543 "params": { 00:17:00.543 "impl_name": "posix", 
00:17:00.543 "recv_buf_size": 2097152, 00:17:00.543 "send_buf_size": 2097152, 00:17:00.543 "enable_recv_pipe": true, 00:17:00.544 "enable_quickack": false, 00:17:00.544 "enable_placement_id": 0, 00:17:00.544 "enable_zerocopy_send_server": true, 00:17:00.544 "enable_zerocopy_send_client": false, 00:17:00.544 "zerocopy_threshold": 0, 00:17:00.544 "tls_version": 0, 00:17:00.544 "enable_ktls": false 00:17:00.544 } 00:17:00.544 }, 00:17:00.544 { 00:17:00.544 "method": "sock_impl_set_options", 00:17:00.544 "params": { 00:17:00.544 "impl_name": "uring", 00:17:00.544 "recv_buf_size": 2097152, 00:17:00.544 "send_buf_size": 2097152, 00:17:00.544 "enable_recv_pipe": true, 00:17:00.544 "enable_quickack": false, 00:17:00.544 "enable_placement_id": 0, 00:17:00.544 "enable_zerocopy_send_server": false, 00:17:00.544 "enable_zerocopy_send_client": false, 00:17:00.544 "zerocopy_threshold": 0, 00:17:00.544 "tls_version": 0, 00:17:00.544 "enable_ktls": false 00:17:00.544 } 00:17:00.544 } 00:17:00.544 ] 00:17:00.544 }, 00:17:00.544 { 00:17:00.544 "subsystem": "vmd", 00:17:00.544 "config": [] 00:17:00.544 }, 00:17:00.544 { 00:17:00.544 "subsystem": "accel", 00:17:00.544 "config": [ 00:17:00.544 { 00:17:00.544 "method": "accel_set_options", 00:17:00.544 "params": { 00:17:00.544 "small_cache_size": 128, 00:17:00.544 "large_cache_size": 16, 00:17:00.544 "task_count": 2048, 00:17:00.544 "sequence_count": 2048, 00:17:00.544 "buf_count": 2048 00:17:00.544 } 00:17:00.544 } 00:17:00.544 ] 00:17:00.544 }, 00:17:00.544 { 00:17:00.544 "subsystem": "bdev", 00:17:00.544 "config": [ 00:17:00.544 { 00:17:00.544 "method": "bdev_set_options", 00:17:00.544 "params": { 00:17:00.544 "bdev_io_pool_size": 65535, 00:17:00.544 "bdev_io_cache_size": 256, 00:17:00.544 "bdev_auto_examine": true, 00:17:00.544 "iobuf_small_cache_size": 128, 00:17:00.544 "iobuf_large_cache_size": 16 00:17:00.544 } 00:17:00.544 }, 00:17:00.544 { 00:17:00.544 "method": "bdev_raid_set_options", 00:17:00.544 "params": { 00:17:00.544 "process_window_size_kb": 1024, 00:17:00.544 "process_max_bandwidth_mb_sec": 0 00:17:00.544 } 00:17:00.544 }, 00:17:00.544 { 00:17:00.544 "method": "bdev_iscsi_set_options", 00:17:00.544 "params": { 00:17:00.544 "timeout_sec": 30 00:17:00.544 } 00:17:00.544 }, 00:17:00.544 { 00:17:00.544 "method": "bdev_nvme_set_options", 00:17:00.544 "params": { 00:17:00.544 "action_on_timeout": "none", 00:17:00.544 "timeout_us": 0, 00:17:00.544 "timeout_admin_us": 0, 00:17:00.544 "keep_alive_timeout_ms": 10000, 00:17:00.544 "arbitration_burst": 0, 00:17:00.544 "low_priority_weight": 0, 00:17:00.544 "medium_priority_weight": 0, 00:17:00.544 "high_priority_weight": 0, 00:17:00.544 "nvme_adminq_poll_period_us": 10000, 00:17:00.544 "nvme_ioq_poll_period_us": 0, 00:17:00.544 "io_queue_requests": 0, 00:17:00.544 "delay_cmd_submit": true, 00:17:00.544 "transport_retry_count": 4, 00:17:00.544 "bdev_retry_count": 3, 00:17:00.544 "transport_ack_timeout": 0, 00:17:00.544 "ctrlr_loss_timeout_sec": 0, 00:17:00.544 "reconnect_delay_sec": 0, 00:17:00.544 "fast_io_fail_timeout_sec": 0, 00:17:00.544 "disable_auto_failback": false, 00:17:00.544 "generate_uuids": false, 00:17:00.544 "transport_tos": 0, 00:17:00.544 "nvme_error_stat": false, 00:17:00.544 "rdma_srq_size": 0, 00:17:00.544 "io_path_stat": false, 00:17:00.544 "allow_accel_sequence": false, 00:17:00.544 "rdma_max_cq_size": 0, 00:17:00.544 "rdma_cm_event_timeout_ms": 0, 00:17:00.544 "dhchap_digests": [ 00:17:00.544 "sha256", 00:17:00.544 "sha384", 00:17:00.544 "sha512" 00:17:00.544 ], 00:17:00.544 
"dhchap_dhgroups": [ 00:17:00.544 "null", 00:17:00.544 "ffdhe2048", 00:17:00.544 "ffdhe3072", 00:17:00.544 "ffdhe4096", 00:17:00.544 "ffdhe6144", 00:17:00.544 "ffdhe8192" 00:17:00.544 ] 00:17:00.544 } 00:17:00.544 }, 00:17:00.544 { 00:17:00.544 "method": "bdev_nvme_set_hotplug", 00:17:00.544 "params": { 00:17:00.544 "period_us": 100000, 00:17:00.544 "enable": false 00:17:00.544 } 00:17:00.544 }, 00:17:00.544 { 00:17:00.544 "method": "bdev_malloc_create", 00:17:00.544 "params": { 00:17:00.544 "name": "malloc0", 00:17:00.544 "num_blocks": 8192, 00:17:00.544 "block_size": 4096, 00:17:00.544 "physical_block_size": 4096, 00:17:00.544 "uuid": "35fdbfbd-845c-44c6-8699-949c4ea0897c", 00:17:00.544 "optimal_io_boundary": 0, 00:17:00.544 "md_size": 0, 00:17:00.544 "dif_type": 0, 00:17:00.544 "dif_is_head_of_md": false, 00:17:00.544 "dif_pi_format": 0 00:17:00.544 } 00:17:00.544 }, 00:17:00.544 { 00:17:00.544 "method": "bdev_wait_for_examine" 00:17:00.544 } 00:17:00.544 ] 00:17:00.544 }, 00:17:00.544 { 00:17:00.544 "subsystem": "nbd", 00:17:00.544 "config": [] 00:17:00.544 }, 00:17:00.544 { 00:17:00.544 "subsystem": "scheduler", 00:17:00.544 "config": [ 00:17:00.544 { 00:17:00.544 "method": "framework_set_scheduler", 00:17:00.544 "params": { 00:17:00.544 "name": "static" 00:17:00.544 } 00:17:00.544 } 00:17:00.544 ] 00:17:00.544 }, 00:17:00.544 { 00:17:00.544 "subsystem": "nvmf", 00:17:00.545 "config": [ 00:17:00.545 { 00:17:00.545 "method": "nvmf_set_config", 00:17:00.545 "params": { 00:17:00.545 "discovery_filter": "match_any", 00:17:00.545 "admin_cmd_passthru": { 00:17:00.545 "identify_ctrlr": false 00:17:00.545 }, 00:17:00.545 "dhchap_digests": [ 00:17:00.545 "sha256", 00:17:00.545 "sha384", 00:17:00.545 "sha512" 00:17:00.545 ], 00:17:00.545 "dhchap_dhgroups": [ 00:17:00.545 "null", 00:17:00.545 "ffdhe2048", 00:17:00.545 "ffdhe3072", 00:17:00.545 "ffdhe4096", 00:17:00.545 "ffdhe6144", 00:17:00.545 "ffdhe8192" 00:17:00.545 ] 00:17:00.545 } 00:17:00.545 }, 00:17:00.545 { 00:17:00.545 "method": "nvmf_set_max_subsystems", 00:17:00.545 "params": { 00:17:00.545 "max_subsystems": 1024 00:17:00.545 } 00:17:00.545 }, 00:17:00.545 { 00:17:00.545 "method": "nvmf_set_crdt", 00:17:00.545 "params": { 00:17:00.545 "crdt1": 0, 00:17:00.545 "crdt2": 0, 00:17:00.545 "crdt3": 0 00:17:00.545 } 00:17:00.545 }, 00:17:00.545 { 00:17:00.545 "method": "nvmf_create_transport", 00:17:00.545 "params": { 00:17:00.545 "trtype": "TCP", 00:17:00.545 "max_queue_depth": 128, 00:17:00.545 "max_io_qpairs_per_ctrlr": 127, 00:17:00.545 "in_capsule_data_size": 4096, 00:17:00.545 "max_io_size": 131072, 00:17:00.545 "io_unit_size": 131072, 00:17:00.545 "max_aq_depth": 128, 00:17:00.545 "num_shared_buffers": 511, 00:17:00.545 "buf_cache_size": 4294967295, 00:17:00.545 "dif_insert_or_strip": false, 00:17:00.545 "zcopy": false, 00:17:00.545 "c2h_success": false, 00:17:00.545 "sock_priority": 0, 00:17:00.545 "abort_timeout_sec": 1, 00:17:00.545 "ack_timeout": 0, 00:17:00.545 "data_wr_pool_size": 0 00:17:00.545 } 00:17:00.545 }, 00:17:00.545 { 00:17:00.545 "method": "nvmf_create_subsystem", 00:17:00.545 "params": { 00:17:00.545 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:00.545 "allow_any_host": false, 00:17:00.545 "serial_number": "00000000000000000000", 00:17:00.545 "model_number": "SPDK bdev Controller", 00:17:00.545 "max_namespaces": 32, 00:17:00.545 "min_cntlid": 1, 00:17:00.545 "max_cntlid": 65519, 00:17:00.545 "ana_reporting": false 00:17:00.545 } 00:17:00.545 }, 00:17:00.545 { 00:17:00.545 "method": "nvmf_subsystem_add_host", 
00:17:00.545 "params": { 00:17:00.545 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:00.545 "host": "nqn.2016-06.io.spdk:host1", 00:17:00.545 "psk": "key0" 00:17:00.545 } 00:17:00.545 }, 00:17:00.545 { 00:17:00.545 "method": "nvmf_subsystem_add_ns", 00:17:00.545 "params": { 00:17:00.545 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:00.545 "namespace": { 00:17:00.545 "nsid": 1, 00:17:00.545 "bdev_name": "malloc0", 00:17:00.545 "nguid": "35FDBFBD845C44C68699949C4EA0897C", 00:17:00.545 "uuid": "35fdbfbd-845c-44c6-8699-949c4ea0897c", 00:17:00.545 "no_auto_visible": false 00:17:00.545 } 00:17:00.545 } 00:17:00.545 }, 00:17:00.545 { 00:17:00.545 "method": "nvmf_subsystem_add_listener", 00:17:00.545 "params": { 00:17:00.545 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:00.545 "listen_address": { 00:17:00.545 "trtype": "TCP", 00:17:00.545 "adrfam": "IPv4", 00:17:00.545 "traddr": "10.0.0.3", 00:17:00.545 "trsvcid": "4420" 00:17:00.545 }, 00:17:00.545 "secure_channel": false, 00:17:00.545 "sock_impl": "ssl" 00:17:00.545 } 00:17:00.545 } 00:17:00.545 ] 00:17:00.545 } 00:17:00.545 ] 00:17:00.545 }' 00:17:00.545 20:44:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:17:01.112 20:44:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:17:01.112 "subsystems": [ 00:17:01.112 { 00:17:01.112 "subsystem": "keyring", 00:17:01.112 "config": [ 00:17:01.112 { 00:17:01.112 "method": "keyring_file_add_key", 00:17:01.112 "params": { 00:17:01.112 "name": "key0", 00:17:01.112 "path": "/tmp/tmp.MAF01qfXiS" 00:17:01.112 } 00:17:01.112 } 00:17:01.112 ] 00:17:01.112 }, 00:17:01.112 { 00:17:01.112 "subsystem": "iobuf", 00:17:01.112 "config": [ 00:17:01.112 { 00:17:01.112 "method": "iobuf_set_options", 00:17:01.112 "params": { 00:17:01.112 "small_pool_count": 8192, 00:17:01.112 "large_pool_count": 1024, 00:17:01.112 "small_bufsize": 8192, 00:17:01.112 "large_bufsize": 135168, 00:17:01.112 "enable_numa": false 00:17:01.112 } 00:17:01.112 } 00:17:01.112 ] 00:17:01.112 }, 00:17:01.112 { 00:17:01.112 "subsystem": "sock", 00:17:01.112 "config": [ 00:17:01.112 { 00:17:01.112 "method": "sock_set_default_impl", 00:17:01.112 "params": { 00:17:01.112 "impl_name": "uring" 00:17:01.112 } 00:17:01.112 }, 00:17:01.112 { 00:17:01.112 "method": "sock_impl_set_options", 00:17:01.112 "params": { 00:17:01.112 "impl_name": "ssl", 00:17:01.112 "recv_buf_size": 4096, 00:17:01.112 "send_buf_size": 4096, 00:17:01.112 "enable_recv_pipe": true, 00:17:01.112 "enable_quickack": false, 00:17:01.112 "enable_placement_id": 0, 00:17:01.112 "enable_zerocopy_send_server": true, 00:17:01.112 "enable_zerocopy_send_client": false, 00:17:01.112 "zerocopy_threshold": 0, 00:17:01.112 "tls_version": 0, 00:17:01.112 "enable_ktls": false 00:17:01.112 } 00:17:01.112 }, 00:17:01.112 { 00:17:01.112 "method": "sock_impl_set_options", 00:17:01.112 "params": { 00:17:01.112 "impl_name": "posix", 00:17:01.112 "recv_buf_size": 2097152, 00:17:01.112 "send_buf_size": 2097152, 00:17:01.112 "enable_recv_pipe": true, 00:17:01.112 "enable_quickack": false, 00:17:01.112 "enable_placement_id": 0, 00:17:01.112 "enable_zerocopy_send_server": true, 00:17:01.112 "enable_zerocopy_send_client": false, 00:17:01.112 "zerocopy_threshold": 0, 00:17:01.112 "tls_version": 0, 00:17:01.113 "enable_ktls": false 00:17:01.113 } 00:17:01.113 }, 00:17:01.113 { 00:17:01.113 "method": "sock_impl_set_options", 00:17:01.113 "params": { 00:17:01.113 "impl_name": "uring", 00:17:01.113 
"recv_buf_size": 2097152, 00:17:01.113 "send_buf_size": 2097152, 00:17:01.113 "enable_recv_pipe": true, 00:17:01.113 "enable_quickack": false, 00:17:01.113 "enable_placement_id": 0, 00:17:01.113 "enable_zerocopy_send_server": false, 00:17:01.113 "enable_zerocopy_send_client": false, 00:17:01.113 "zerocopy_threshold": 0, 00:17:01.113 "tls_version": 0, 00:17:01.113 "enable_ktls": false 00:17:01.113 } 00:17:01.113 } 00:17:01.113 ] 00:17:01.113 }, 00:17:01.113 { 00:17:01.113 "subsystem": "vmd", 00:17:01.113 "config": [] 00:17:01.113 }, 00:17:01.113 { 00:17:01.113 "subsystem": "accel", 00:17:01.113 "config": [ 00:17:01.113 { 00:17:01.113 "method": "accel_set_options", 00:17:01.113 "params": { 00:17:01.113 "small_cache_size": 128, 00:17:01.113 "large_cache_size": 16, 00:17:01.113 "task_count": 2048, 00:17:01.113 "sequence_count": 2048, 00:17:01.113 "buf_count": 2048 00:17:01.113 } 00:17:01.113 } 00:17:01.113 ] 00:17:01.113 }, 00:17:01.113 { 00:17:01.113 "subsystem": "bdev", 00:17:01.113 "config": [ 00:17:01.113 { 00:17:01.113 "method": "bdev_set_options", 00:17:01.113 "params": { 00:17:01.113 "bdev_io_pool_size": 65535, 00:17:01.113 "bdev_io_cache_size": 256, 00:17:01.113 "bdev_auto_examine": true, 00:17:01.113 "iobuf_small_cache_size": 128, 00:17:01.113 "iobuf_large_cache_size": 16 00:17:01.113 } 00:17:01.113 }, 00:17:01.113 { 00:17:01.113 "method": "bdev_raid_set_options", 00:17:01.113 "params": { 00:17:01.113 "process_window_size_kb": 1024, 00:17:01.113 "process_max_bandwidth_mb_sec": 0 00:17:01.113 } 00:17:01.113 }, 00:17:01.113 { 00:17:01.113 "method": "bdev_iscsi_set_options", 00:17:01.113 "params": { 00:17:01.113 "timeout_sec": 30 00:17:01.113 } 00:17:01.113 }, 00:17:01.113 { 00:17:01.113 "method": "bdev_nvme_set_options", 00:17:01.113 "params": { 00:17:01.113 "action_on_timeout": "none", 00:17:01.113 "timeout_us": 0, 00:17:01.113 "timeout_admin_us": 0, 00:17:01.113 "keep_alive_timeout_ms": 10000, 00:17:01.113 "arbitration_burst": 0, 00:17:01.113 "low_priority_weight": 0, 00:17:01.113 "medium_priority_weight": 0, 00:17:01.113 "high_priority_weight": 0, 00:17:01.113 "nvme_adminq_poll_period_us": 10000, 00:17:01.113 "nvme_ioq_poll_period_us": 0, 00:17:01.113 "io_queue_requests": 512, 00:17:01.113 "delay_cmd_submit": true, 00:17:01.113 "transport_retry_count": 4, 00:17:01.113 "bdev_retry_count": 3, 00:17:01.113 "transport_ack_timeout": 0, 00:17:01.113 "ctrlr_loss_timeout_sec": 0, 00:17:01.113 "reconnect_delay_sec": 0, 00:17:01.113 "fast_io_fail_timeout_sec": 0, 00:17:01.113 "disable_auto_failback": false, 00:17:01.113 "generate_uuids": false, 00:17:01.113 "transport_tos": 0, 00:17:01.113 "nvme_error_stat": false, 00:17:01.113 "rdma_srq_size": 0, 00:17:01.113 "io_path_stat": false, 00:17:01.113 "allow_accel_sequence": false, 00:17:01.113 "rdma_max_cq_size": 0, 00:17:01.113 "rdma_cm_event_timeout_ms": 0, 00:17:01.113 "dhchap_digests": [ 00:17:01.113 "sha256", 00:17:01.113 "sha384", 00:17:01.113 "sha512" 00:17:01.113 ], 00:17:01.113 "dhchap_dhgroups": [ 00:17:01.113 "null", 00:17:01.113 "ffdhe2048", 00:17:01.113 "ffdhe3072", 00:17:01.113 "ffdhe4096", 00:17:01.113 "ffdhe6144", 00:17:01.113 "ffdhe8192" 00:17:01.113 ] 00:17:01.113 } 00:17:01.113 }, 00:17:01.113 { 00:17:01.113 "method": "bdev_nvme_attach_controller", 00:17:01.113 "params": { 00:17:01.113 "name": "nvme0", 00:17:01.113 "trtype": "TCP", 00:17:01.113 "adrfam": "IPv4", 00:17:01.113 "traddr": "10.0.0.3", 00:17:01.113 "trsvcid": "4420", 00:17:01.113 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:01.113 "prchk_reftag": false, 00:17:01.113 
"prchk_guard": false, 00:17:01.113 "ctrlr_loss_timeout_sec": 0, 00:17:01.113 "reconnect_delay_sec": 0, 00:17:01.113 "fast_io_fail_timeout_sec": 0, 00:17:01.113 "psk": "key0", 00:17:01.113 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:01.113 "hdgst": false, 00:17:01.113 "ddgst": false, 00:17:01.113 "multipath": "multipath" 00:17:01.113 } 00:17:01.113 }, 00:17:01.113 { 00:17:01.113 "method": "bdev_nvme_set_hotplug", 00:17:01.113 "params": { 00:17:01.113 "period_us": 100000, 00:17:01.113 "enable": false 00:17:01.113 } 00:17:01.113 }, 00:17:01.113 { 00:17:01.113 "method": "bdev_enable_histogram", 00:17:01.113 "params": { 00:17:01.113 "name": "nvme0n1", 00:17:01.113 "enable": true 00:17:01.113 } 00:17:01.113 }, 00:17:01.113 { 00:17:01.113 "method": "bdev_wait_for_examine" 00:17:01.113 } 00:17:01.113 ] 00:17:01.113 }, 00:17:01.113 { 00:17:01.113 "subsystem": "nbd", 00:17:01.113 "config": [] 00:17:01.113 } 00:17:01.113 ] 00:17:01.113 }' 00:17:01.113 20:44:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 72889 00:17:01.113 20:44:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72889 ']' 00:17:01.113 20:44:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72889 00:17:01.113 20:44:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:17:01.113 20:44:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:01.113 20:44:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72889 00:17:01.113 20:44:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:01.113 20:44:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:01.113 killing process with pid 72889 00:17:01.113 20:44:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72889' 00:17:01.113 20:44:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72889 00:17:01.113 Received shutdown signal, test time was about 1.000000 seconds 00:17:01.113 00:17:01.113 Latency(us) 00:17:01.113 [2024-11-26T20:44:56.106Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:01.113 [2024-11-26T20:44:56.106Z] =================================================================================================================== 00:17:01.113 [2024-11-26T20:44:56.106Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:01.113 20:44:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72889 00:17:01.113 20:44:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 72865 00:17:01.113 20:44:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72865 ']' 00:17:01.113 20:44:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72865 00:17:01.113 20:44:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:17:01.113 20:44:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:01.113 20:44:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72865 00:17:01.113 20:44:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:01.113 20:44:56 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:01.113 killing process with pid 72865 00:17:01.113 20:44:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72865' 00:17:01.113 20:44:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72865 00:17:01.113 20:44:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72865 00:17:01.373 20:44:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:17:01.373 20:44:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:01.373 20:44:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:01.373 20:44:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:17:01.373 "subsystems": [ 00:17:01.373 { 00:17:01.373 "subsystem": "keyring", 00:17:01.373 "config": [ 00:17:01.373 { 00:17:01.373 "method": "keyring_file_add_key", 00:17:01.373 "params": { 00:17:01.373 "name": "key0", 00:17:01.373 "path": "/tmp/tmp.MAF01qfXiS" 00:17:01.373 } 00:17:01.373 } 00:17:01.373 ] 00:17:01.373 }, 00:17:01.373 { 00:17:01.373 "subsystem": "iobuf", 00:17:01.373 "config": [ 00:17:01.373 { 00:17:01.373 "method": "iobuf_set_options", 00:17:01.373 "params": { 00:17:01.373 "small_pool_count": 8192, 00:17:01.373 "large_pool_count": 1024, 00:17:01.373 "small_bufsize": 8192, 00:17:01.373 "large_bufsize": 135168, 00:17:01.373 "enable_numa": false 00:17:01.373 } 00:17:01.373 } 00:17:01.373 ] 00:17:01.373 }, 00:17:01.373 { 00:17:01.373 "subsystem": "sock", 00:17:01.373 "config": [ 00:17:01.373 { 00:17:01.373 "method": "sock_set_default_impl", 00:17:01.373 "params": { 00:17:01.373 "impl_name": "uring" 00:17:01.373 } 00:17:01.373 }, 00:17:01.373 { 00:17:01.373 "method": "sock_impl_set_options", 00:17:01.373 "params": { 00:17:01.373 "impl_name": "ssl", 00:17:01.373 "recv_buf_size": 4096, 00:17:01.373 "send_buf_size": 4096, 00:17:01.373 "enable_recv_pipe": true, 00:17:01.373 "enable_quickack": false, 00:17:01.373 "enable_placement_id": 0, 00:17:01.373 "enable_zerocopy_send_server": true, 00:17:01.373 "enable_zerocopy_send_client": false, 00:17:01.373 "zerocopy_threshold": 0, 00:17:01.373 "tls_version": 0, 00:17:01.373 "enable_ktls": false 00:17:01.373 } 00:17:01.373 }, 00:17:01.373 { 00:17:01.373 "method": "sock_impl_set_options", 00:17:01.373 "params": { 00:17:01.373 "impl_name": "posix", 00:17:01.373 "recv_buf_size": 2097152, 00:17:01.373 "send_buf_size": 2097152, 00:17:01.373 "enable_recv_pipe": true, 00:17:01.373 "enable_quickack": false, 00:17:01.373 "enable_placement_id": 0, 00:17:01.373 "enable_zerocopy_send_server": true, 00:17:01.373 "enable_zerocopy_send_client": false, 00:17:01.373 "zerocopy_threshold": 0, 00:17:01.373 "tls_version": 0, 00:17:01.373 "enable_ktls": false 00:17:01.373 } 00:17:01.373 }, 00:17:01.373 { 00:17:01.373 "method": "sock_impl_set_options", 00:17:01.373 "params": { 00:17:01.373 "impl_name": "uring", 00:17:01.373 "recv_buf_size": 2097152, 00:17:01.373 "send_buf_size": 2097152, 00:17:01.373 "enable_recv_pipe": true, 00:17:01.373 "enable_quickack": false, 00:17:01.373 "enable_placement_id": 0, 00:17:01.373 "enable_zerocopy_send_server": false, 00:17:01.373 "enable_zerocopy_send_client": false, 00:17:01.373 "zerocopy_threshold": 0, 00:17:01.373 "tls_version": 0, 00:17:01.373 "enable_ktls": false 00:17:01.373 } 00:17:01.373 } 00:17:01.373 ] 
00:17:01.373 }, 00:17:01.373 { 00:17:01.373 "subsystem": "vmd", 00:17:01.373 "config": [] 00:17:01.373 }, 00:17:01.373 { 00:17:01.373 "subsystem": "accel", 00:17:01.373 "config": [ 00:17:01.373 { 00:17:01.373 "method": "accel_set_options", 00:17:01.373 "params": { 00:17:01.373 "small_cache_size": 128, 00:17:01.373 "large_cache_size": 16, 00:17:01.373 "task_count": 2048, 00:17:01.373 "sequence_count": 2048, 00:17:01.373 "buf_count": 2048 00:17:01.373 } 00:17:01.373 } 00:17:01.373 ] 00:17:01.373 }, 00:17:01.373 { 00:17:01.373 "subsystem": "bdev", 00:17:01.373 "config": [ 00:17:01.373 { 00:17:01.373 "method": "bdev_set_options", 00:17:01.373 "params": { 00:17:01.373 "bdev_io_pool_size": 65535, 00:17:01.373 "bdev_io_cache_size": 256, 00:17:01.373 "bdev_auto_examine": true, 00:17:01.373 "iobuf_small_cache_size": 128, 00:17:01.373 "iobuf_large_cache_size": 16 00:17:01.373 } 00:17:01.373 }, 00:17:01.373 { 00:17:01.373 "method": "bdev_raid_set_options", 00:17:01.373 "params": { 00:17:01.373 "process_window_size_kb": 1024, 00:17:01.373 "process_max_bandwidth_mb_sec": 0 00:17:01.373 } 00:17:01.373 }, 00:17:01.373 { 00:17:01.373 "method": "bdev_iscsi_set_options", 00:17:01.373 "params": { 00:17:01.373 "timeout_sec": 30 00:17:01.373 } 00:17:01.373 }, 00:17:01.373 { 00:17:01.373 "method": "bdev_nvme_set_options", 00:17:01.373 "params": { 00:17:01.373 "action_on_timeout": "none", 00:17:01.373 "timeout_us": 0, 00:17:01.373 "timeout_admin_us": 0, 00:17:01.373 "keep_alive_timeout_ms": 10000, 00:17:01.373 "arbitration_burst": 0, 00:17:01.373 "low_priority_weight": 0, 00:17:01.373 "medium_priority_weight": 0, 00:17:01.373 "high_priority_weight": 0, 00:17:01.373 "nvme_adminq_poll_period_us": 10000, 00:17:01.373 "nvme_ioq_poll_period_us": 0, 00:17:01.373 "io_queue_requests": 0, 00:17:01.373 "delay_cmd_submit": true, 00:17:01.373 "transport_retry_count": 4, 00:17:01.373 "bdev_retry_count": 3, 00:17:01.373 "transport_ack_timeout": 0, 00:17:01.373 "ctrlr_loss_timeout_sec": 0, 00:17:01.373 "reconnect_delay_sec": 0, 00:17:01.373 "fast_io_fail_timeout_sec": 0, 00:17:01.373 "disable_auto_failback": false, 00:17:01.373 "generate_uuids": false, 00:17:01.373 "transport_tos": 0, 00:17:01.373 "nvme_error_stat": false, 00:17:01.373 "rdma_srq_size": 0, 00:17:01.373 "io_path_stat": false, 00:17:01.373 "allow_accel_sequence": false, 00:17:01.373 "rdma_max_cq_size": 0, 00:17:01.373 "rdma_cm_event_timeout_ms": 0, 00:17:01.373 "dhchap_digests": [ 00:17:01.373 "sha256", 00:17:01.373 "sha384", 00:17:01.373 "sha512" 00:17:01.373 ], 00:17:01.373 "dhchap_dhgroups": [ 00:17:01.373 "null", 00:17:01.373 "ffdhe2048", 00:17:01.373 "ffdhe3072", 00:17:01.373 "ffdhe4096", 00:17:01.373 "ffdhe6144", 00:17:01.373 "ffdhe8192" 00:17:01.373 ] 00:17:01.373 } 00:17:01.373 }, 00:17:01.373 { 00:17:01.373 "method": "bdev_nvme_set_hotplug", 00:17:01.373 "params": { 00:17:01.373 "period_us": 100000, 00:17:01.373 "enable": false 00:17:01.373 } 00:17:01.373 }, 00:17:01.373 { 00:17:01.373 "method": "bdev_malloc_create", 00:17:01.373 "params": { 00:17:01.373 "name": "malloc0", 00:17:01.373 "num_blocks": 8192, 00:17:01.373 "block_size": 4096, 00:17:01.373 "physical_block_size": 4096, 00:17:01.373 "uuid": "35fdbfbd-845c-44c6-8699-949c4ea0897c", 00:17:01.373 "optimal_io_boundary": 0, 00:17:01.373 "md_size": 0, 00:17:01.373 "dif_type": 0, 00:17:01.373 "dif_is_head_of_md": false, 00:17:01.373 "dif_pi_format": 0 00:17:01.373 } 00:17:01.373 }, 00:17:01.373 { 00:17:01.374 "method": "bdev_wait_for_examine" 00:17:01.374 } 00:17:01.374 ] 00:17:01.374 }, 00:17:01.374 { 
00:17:01.374 "subsystem": "nbd", 00:17:01.374 "config": [] 00:17:01.374 }, 00:17:01.374 { 00:17:01.374 "subsystem": "scheduler", 00:17:01.374 "config": [ 00:17:01.374 { 00:17:01.374 "method": "framework_set_scheduler", 00:17:01.374 "params": { 00:17:01.374 "name": "static" 00:17:01.374 } 00:17:01.374 } 00:17:01.374 ] 00:17:01.374 }, 00:17:01.374 { 00:17:01.374 "subsystem": "nvmf", 00:17:01.374 "config": [ 00:17:01.374 { 00:17:01.374 "method": "nvmf_set_config", 00:17:01.374 "params": { 00:17:01.374 "discovery_filter": "match_any", 00:17:01.374 "admin_cmd_passthru": { 00:17:01.374 "identify_ctrlr": false 00:17:01.374 }, 00:17:01.374 "dhchap_digests": [ 00:17:01.374 "sha256", 00:17:01.374 "sha384", 00:17:01.374 "sha512" 00:17:01.374 ], 00:17:01.374 "dhchap_dhgroups": [ 00:17:01.374 "null", 00:17:01.374 "ffdhe2048", 00:17:01.374 "ffdhe3072", 00:17:01.374 "ffdhe4096", 00:17:01.374 "ffdhe6144", 00:17:01.374 "ffdhe8192" 00:17:01.374 ] 00:17:01.374 } 00:17:01.374 }, 00:17:01.374 { 00:17:01.374 "method": "nvmf_set_max_subsystems", 00:17:01.374 "params": { 00:17:01.374 "max_subsystems": 1024 00:17:01.374 } 00:17:01.374 }, 00:17:01.374 { 00:17:01.374 "method": "nvmf_set_crdt", 00:17:01.374 "params": { 00:17:01.374 "crdt1": 0, 00:17:01.374 "crdt2": 0, 00:17:01.374 "crdt3": 0 00:17:01.374 } 00:17:01.374 }, 00:17:01.374 { 00:17:01.374 "method": "nvmf_create_transport", 00:17:01.374 "params": { 00:17:01.374 "trtype": "TCP", 00:17:01.374 "max_queue_depth": 128, 00:17:01.374 "max_io_qpairs_per_ctrlr": 127, 00:17:01.374 "in_capsule_data_size": 4096, 00:17:01.374 "max_io_size": 131072, 00:17:01.374 "io_unit_size": 131072, 00:17:01.374 "max_aq_depth": 128, 00:17:01.374 "num_shared_buffers": 511, 00:17:01.374 "buf_cache_size": 4294967295, 00:17:01.374 "dif_insert_or_strip": false, 00:17:01.374 "zcopy": false, 00:17:01.374 "c2h_success": false, 00:17:01.374 "sock_priority": 0, 00:17:01.374 "abort_timeout_sec": 1, 00:17:01.374 "ack_timeout": 0, 00:17:01.374 "data_wr_pool_size": 0 00:17:01.374 } 00:17:01.374 }, 00:17:01.374 { 00:17:01.374 "method": "nvmf_create_subsystem", 00:17:01.374 "params": { 00:17:01.374 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:01.374 "allow_any_host": false, 00:17:01.374 "serial_number": "00000000000000000000", 00:17:01.374 "model_number": "SPDK bdev Controller", 00:17:01.374 "max_namespaces": 32, 00:17:01.374 "min_cntlid": 1, 00:17:01.374 "max_cntlid": 65519, 00:17:01.374 "ana_reporting": false 00:17:01.374 } 00:17:01.374 }, 00:17:01.374 { 00:17:01.374 "method": "nvmf_subsystem_add_host", 00:17:01.374 "params": { 00:17:01.374 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:01.374 "host": "nqn.2016-06.io.spdk:host1", 00:17:01.374 "psk": "key0" 00:17:01.374 } 00:17:01.374 }, 00:17:01.374 { 00:17:01.374 "method": "nvmf_subsystem_add_ns", 00:17:01.374 "params": { 00:17:01.374 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:01.374 "namespace": { 00:17:01.374 "nsid": 1, 00:17:01.374 "bdev_name": "malloc0", 00:17:01.374 "nguid": "35FDBFBD845C44C68699949C4EA0897C", 00:17:01.374 "uuid": "35fdbfbd-845c-44c6-8699-949c4ea0897c", 00:17:01.374 "no_auto_visible": false 00:17:01.374 } 00:17:01.374 } 00:17:01.374 }, 00:17:01.374 { 00:17:01.374 "method": "nvmf_subsystem_add_listener", 00:17:01.374 "params": { 00:17:01.374 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:01.374 "listen_address": { 00:17:01.374 "trtype": "TCP", 00:17:01.374 "adrfam": "IPv4", 00:17:01.374 "traddr": "10.0.0.3", 00:17:01.374 "trsvcid": "4420" 00:17:01.374 }, 00:17:01.374 "secure_channel": false, 00:17:01.374 "sock_impl": "ssl" 00:17:01.374 
} 00:17:01.374 } 00:17:01.374 ] 00:17:01.374 } 00:17:01.374 ] 00:17:01.374 }' 00:17:01.374 20:44:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:01.374 20:44:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:17:01.374 20:44:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=72941 00:17:01.374 20:44:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 72941 00:17:01.374 20:44:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72941 ']' 00:17:01.374 20:44:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:01.374 20:44:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:01.374 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:01.374 20:44:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:01.374 20:44:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:01.374 20:44:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:01.633 [2024-11-26 20:44:56.403330] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:17:01.633 [2024-11-26 20:44:56.403408] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:01.633 [2024-11-26 20:44:56.543430] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:01.633 [2024-11-26 20:44:56.608114] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:01.633 [2024-11-26 20:44:56.608193] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:01.633 [2024-11-26 20:44:56.608204] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:01.633 [2024-11-26 20:44:56.608213] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:01.633 [2024-11-26 20:44:56.608220] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
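As an illustrative aside (not part of the captured output above): the JSON blob fed to nvmf_tgt through -c /dev/fd/62 is the target-side configuration captured earlier with rpc.py save_config, including the key0 PSK entry in the keyring section and the TLS-capable TCP listener on 10.0.0.3:4420. Assuming a target is already up and serving RPCs on the default /var/tmp/spdk.sock socket, roughly the same blob could be captured and replayed over the RPC socket instead of being passed at start-up; the file name below is a placeholder.

  # dump the running target's configuration to a file (target_config.json is a placeholder)
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock save_config > target_config.json
  # later, replay the saved configuration into a freshly started, still-unconfigured target
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock load_config < target_config.json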
00:17:01.633 [2024-11-26 20:44:56.608656] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:01.891 [2024-11-26 20:44:56.799252] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:02.150 [2024-11-26 20:44:56.896399] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:02.150 [2024-11-26 20:44:56.928345] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:02.150 [2024-11-26 20:44:56.928586] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:02.408 20:44:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:02.408 20:44:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:02.408 20:44:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:02.408 20:44:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:02.408 20:44:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:02.666 20:44:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:02.666 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:02.666 20:44:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=72970 00:17:02.666 20:44:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 72970 /var/tmp/bdevperf.sock 00:17:02.666 20:44:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72970 ']' 00:17:02.666 20:44:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:02.666 20:44:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:02.666 20:44:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:17:02.666 20:44:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:02.666 20:44:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:17:02.666 20:44:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:02.666 20:44:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:17:02.666 "subsystems": [ 00:17:02.666 { 00:17:02.666 "subsystem": "keyring", 00:17:02.666 "config": [ 00:17:02.666 { 00:17:02.666 "method": "keyring_file_add_key", 00:17:02.666 "params": { 00:17:02.666 "name": "key0", 00:17:02.666 "path": "/tmp/tmp.MAF01qfXiS" 00:17:02.666 } 00:17:02.666 } 00:17:02.666 ] 00:17:02.666 }, 00:17:02.666 { 00:17:02.666 "subsystem": "iobuf", 00:17:02.666 "config": [ 00:17:02.666 { 00:17:02.666 "method": "iobuf_set_options", 00:17:02.666 "params": { 00:17:02.666 "small_pool_count": 8192, 00:17:02.666 "large_pool_count": 1024, 00:17:02.666 "small_bufsize": 8192, 00:17:02.666 "large_bufsize": 135168, 00:17:02.666 "enable_numa": false 00:17:02.666 } 00:17:02.666 } 00:17:02.666 ] 00:17:02.666 }, 00:17:02.666 { 00:17:02.666 "subsystem": "sock", 00:17:02.666 "config": [ 00:17:02.666 { 00:17:02.666 "method": "sock_set_default_impl", 00:17:02.666 "params": { 00:17:02.666 "impl_name": "uring" 00:17:02.666 } 00:17:02.666 }, 00:17:02.666 { 00:17:02.666 "method": "sock_impl_set_options", 00:17:02.666 "params": { 00:17:02.666 "impl_name": "ssl", 00:17:02.666 "recv_buf_size": 4096, 00:17:02.666 "send_buf_size": 4096, 00:17:02.666 "enable_recv_pipe": true, 00:17:02.666 "enable_quickack": false, 00:17:02.666 "enable_placement_id": 0, 00:17:02.666 "enable_zerocopy_send_server": true, 00:17:02.666 "enable_zerocopy_send_client": false, 00:17:02.666 "zerocopy_threshold": 0, 00:17:02.666 "tls_version": 0, 00:17:02.666 "enable_ktls": false 00:17:02.666 } 00:17:02.666 }, 00:17:02.666 { 00:17:02.666 "method": "sock_impl_set_options", 00:17:02.666 "params": { 00:17:02.666 "impl_name": "posix", 00:17:02.666 "recv_buf_size": 2097152, 00:17:02.666 "send_buf_size": 2097152, 00:17:02.666 "enable_recv_pipe": true, 00:17:02.666 "enable_quickack": false, 00:17:02.666 "enable_placement_id": 0, 00:17:02.667 "enable_zerocopy_send_server": true, 00:17:02.667 "enable_zerocopy_send_client": false, 00:17:02.667 "zerocopy_threshold": 0, 00:17:02.667 "tls_version": 0, 00:17:02.667 "enable_ktls": false 00:17:02.667 } 00:17:02.667 }, 00:17:02.667 { 00:17:02.667 "method": "sock_impl_set_options", 00:17:02.667 "params": { 00:17:02.667 "impl_name": "uring", 00:17:02.667 "recv_buf_size": 2097152, 00:17:02.667 "send_buf_size": 2097152, 00:17:02.667 "enable_recv_pipe": true, 00:17:02.667 "enable_quickack": false, 00:17:02.667 "enable_placement_id": 0, 00:17:02.667 "enable_zerocopy_send_server": false, 00:17:02.667 "enable_zerocopy_send_client": false, 00:17:02.667 "zerocopy_threshold": 0, 00:17:02.667 "tls_version": 0, 00:17:02.667 "enable_ktls": false 00:17:02.667 } 00:17:02.667 } 00:17:02.667 ] 00:17:02.667 }, 00:17:02.667 { 00:17:02.667 "subsystem": "vmd", 00:17:02.667 "config": [] 00:17:02.667 }, 00:17:02.667 { 00:17:02.667 "subsystem": "accel", 00:17:02.667 "config": [ 00:17:02.667 { 00:17:02.667 "method": "accel_set_options", 00:17:02.667 "params": { 00:17:02.667 "small_cache_size": 128, 00:17:02.667 "large_cache_size": 16, 00:17:02.667 "task_count": 2048, 00:17:02.667 "sequence_count": 2048, 
00:17:02.667 "buf_count": 2048 00:17:02.667 } 00:17:02.667 } 00:17:02.667 ] 00:17:02.667 }, 00:17:02.667 { 00:17:02.667 "subsystem": "bdev", 00:17:02.667 "config": [ 00:17:02.667 { 00:17:02.667 "method": "bdev_set_options", 00:17:02.667 "params": { 00:17:02.667 "bdev_io_pool_size": 65535, 00:17:02.667 "bdev_io_cache_size": 256, 00:17:02.667 "bdev_auto_examine": true, 00:17:02.667 "iobuf_small_cache_size": 128, 00:17:02.667 "iobuf_large_cache_size": 16 00:17:02.667 } 00:17:02.667 }, 00:17:02.667 { 00:17:02.667 "method": "bdev_raid_set_options", 00:17:02.667 "params": { 00:17:02.667 "process_window_size_kb": 1024, 00:17:02.667 "process_max_bandwidth_mb_sec": 0 00:17:02.667 } 00:17:02.667 }, 00:17:02.667 { 00:17:02.667 "method": "bdev_iscsi_set_options", 00:17:02.667 "params": { 00:17:02.667 "timeout_sec": 30 00:17:02.667 } 00:17:02.667 }, 00:17:02.667 { 00:17:02.667 "method": "bdev_nvme_set_options", 00:17:02.667 "params": { 00:17:02.667 "action_on_timeout": "none", 00:17:02.667 "timeout_us": 0, 00:17:02.667 "timeout_admin_us": 0, 00:17:02.667 "keep_alive_timeout_ms": 10000, 00:17:02.667 "arbitration_burst": 0, 00:17:02.667 "low_priority_weight": 0, 00:17:02.667 "medium_priority_weight": 0, 00:17:02.667 "high_priority_weight": 0, 00:17:02.667 "nvme_adminq_poll_period_us": 10000, 00:17:02.667 "nvme_ioq_poll_period_us": 0, 00:17:02.667 "io_queue_requests": 512, 00:17:02.667 "delay_cmd_submit": true, 00:17:02.667 "transport_retry_count": 4, 00:17:02.667 "bdev_retry_count": 3, 00:17:02.667 "transport_ack_timeout": 0, 00:17:02.667 "ctrlr_loss_timeout_sec": 0, 00:17:02.667 "reconnect_delay_sec": 0, 00:17:02.667 "fast_io_fail_timeout_sec": 0, 00:17:02.667 "disable_auto_failback": false, 00:17:02.667 "generate_uuids": false, 00:17:02.667 "transport_tos": 0, 00:17:02.667 "nvme_error_stat": false, 00:17:02.667 "rdma_srq_size": 0, 00:17:02.667 "io_path_stat": false, 00:17:02.667 "allow_accel_sequence": false, 00:17:02.667 "rdma_max_cq_size": 0, 00:17:02.667 "rdma_cm_event_timeout_ms": 0, 00:17:02.667 "dhchap_digests": [ 00:17:02.667 "sha256", 00:17:02.667 "sha384", 00:17:02.667 "sha512" 00:17:02.667 ], 00:17:02.667 "dhchap_dhgroups": [ 00:17:02.667 "null", 00:17:02.667 "ffdhe2048", 00:17:02.667 "ffdhe3072", 00:17:02.667 "ffdhe4096", 00:17:02.667 "ffdhe6144", 00:17:02.667 "ffdhe8192" 00:17:02.667 ] 00:17:02.667 } 00:17:02.667 }, 00:17:02.667 { 00:17:02.667 "method": "bdev_nvme_attach_controller", 00:17:02.667 "params": { 00:17:02.667 "name": "nvme0", 00:17:02.667 "trtype": "TCP", 00:17:02.667 "adrfam": "IPv4", 00:17:02.667 "traddr": "10.0.0.3", 00:17:02.667 "trsvcid": "4420", 00:17:02.667 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:02.667 "prchk_reftag": false, 00:17:02.667 "prchk_guard": false, 00:17:02.667 "ctrlr_loss_timeout_sec": 0, 00:17:02.667 "reconnect_delay_sec": 0, 00:17:02.667 "fast_io_fail_timeout_sec": 0, 00:17:02.667 "psk": "key0", 00:17:02.667 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:02.667 "hdgst": false, 00:17:02.667 "ddgst": false, 00:17:02.667 "multipath": "multipath" 00:17:02.667 } 00:17:02.667 }, 00:17:02.667 { 00:17:02.667 "method": "bdev_nvme_set_hotplug", 00:17:02.667 "params": { 00:17:02.667 "period_us": 100000, 00:17:02.667 "enable": false 00:17:02.667 } 00:17:02.667 }, 00:17:02.667 { 00:17:02.667 "method": "bdev_enable_histogram", 00:17:02.667 "params": { 00:17:02.667 "name": "nvme0n1", 00:17:02.667 "enable": true 00:17:02.667 } 00:17:02.667 }, 00:17:02.667 { 00:17:02.667 "method": "bdev_wait_for_examine" 00:17:02.667 } 00:17:02.667 ] 00:17:02.667 }, 00:17:02.667 { 
00:17:02.667 "subsystem": "nbd", 00:17:02.667 "config": [] 00:17:02.667 } 00:17:02.667 ] 00:17:02.667 }' 00:17:02.667 [2024-11-26 20:44:57.489980] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:17:02.667 [2024-11-26 20:44:57.490062] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72970 ] 00:17:02.667 [2024-11-26 20:44:57.639559] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:02.925 [2024-11-26 20:44:57.704688] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:02.925 [2024-11-26 20:44:57.836535] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:02.925 [2024-11-26 20:44:57.895766] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:03.493 20:44:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:03.493 20:44:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:03.493 20:44:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:17:03.493 20:44:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:17:03.752 20:44:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:03.752 20:44:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:04.010 Running I/O for 1 seconds... 
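For orientation, a sketch (not part of the captured output) of the bdevperf flow being exercised here: the tool is started idle with -z on its own RPC socket, the TLS-backed NVMe controller attachment is verified over that socket, and the timed I/O run is then triggered explicitly. The commands are the ones visible in the trace above; bperf_config.json is a placeholder for the JSON otherwise piped through /dev/fd/63.

  # 1. start bdevperf idle (-z waits for an RPC trigger), reading its bdev/initiator config from a file
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c bperf_config.json &
  # 2. confirm the nvme0 controller attached through the TLS/PSK path
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
  # 3. kick off the timed verify workload
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests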
00:17:04.946 5371.00 IOPS, 20.98 MiB/s 00:17:04.946 Latency(us) 00:17:04.946 [2024-11-26T20:44:59.939Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:04.946 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:04.946 Verification LBA range: start 0x0 length 0x2000 00:17:04.946 nvme0n1 : 1.01 5431.41 21.22 0.00 0.00 23397.03 4400.27 17351.44 00:17:04.946 [2024-11-26T20:44:59.939Z] =================================================================================================================== 00:17:04.946 [2024-11-26T20:44:59.939Z] Total : 5431.41 21.22 0.00 0.00 23397.03 4400.27 17351.44 00:17:04.946 { 00:17:04.946 "results": [ 00:17:04.946 { 00:17:04.946 "job": "nvme0n1", 00:17:04.946 "core_mask": "0x2", 00:17:04.946 "workload": "verify", 00:17:04.946 "status": "finished", 00:17:04.946 "verify_range": { 00:17:04.946 "start": 0, 00:17:04.946 "length": 8192 00:17:04.946 }, 00:17:04.946 "queue_depth": 128, 00:17:04.946 "io_size": 4096, 00:17:04.946 "runtime": 1.012444, 00:17:04.946 "iops": 5431.4115151060205, 00:17:04.946 "mibps": 21.216451230882893, 00:17:04.946 "io_failed": 0, 00:17:04.946 "io_timeout": 0, 00:17:04.946 "avg_latency_us": 23397.031314784505, 00:17:04.946 "min_latency_us": 4400.274285714286, 00:17:04.946 "max_latency_us": 17351.43619047619 00:17:04.946 } 00:17:04.946 ], 00:17:04.946 "core_count": 1 00:17:04.946 } 00:17:04.946 20:44:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:17:04.946 20:44:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:17:04.946 20:44:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:17:04.946 20:44:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:17:04.946 20:44:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 00:17:04.946 20:44:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:17:04.946 20:44:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:17:04.946 20:44:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:17:04.946 20:44:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:17:04.946 20:44:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:17:04.946 20:44:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:17:04.946 nvmf_trace.0 00:17:04.946 20:44:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:17:04.946 20:44:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 72970 00:17:04.946 20:44:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72970 ']' 00:17:04.946 20:44:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72970 00:17:04.946 20:44:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:17:04.946 20:44:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:04.946 20:44:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72970 00:17:04.946 20:44:59 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:04.946 20:44:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:04.946 killing process with pid 72970 00:17:04.946 20:44:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72970' 00:17:04.947 20:44:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72970 00:17:04.947 Received shutdown signal, test time was about 1.000000 seconds 00:17:04.947 00:17:04.947 Latency(us) 00:17:04.947 [2024-11-26T20:44:59.940Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:04.947 [2024-11-26T20:44:59.940Z] =================================================================================================================== 00:17:04.947 [2024-11-26T20:44:59.940Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:04.947 20:44:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72970 00:17:05.206 20:45:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:17:05.206 20:45:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:05.206 20:45:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:17:05.206 20:45:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:05.206 20:45:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:17:05.206 20:45:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:05.206 20:45:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:05.206 rmmod nvme_tcp 00:17:05.465 rmmod nvme_fabrics 00:17:05.465 rmmod nvme_keyring 00:17:05.465 20:45:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:05.465 20:45:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:17:05.465 20:45:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:17:05.465 20:45:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 72941 ']' 00:17:05.465 20:45:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 72941 00:17:05.465 20:45:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72941 ']' 00:17:05.465 20:45:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72941 00:17:05.465 20:45:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:17:05.465 20:45:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:05.465 20:45:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72941 00:17:05.465 20:45:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:05.465 20:45:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:05.465 killing process with pid 72941 00:17:05.465 20:45:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72941' 00:17:05.465 20:45:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72941 00:17:05.465 20:45:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # 
wait 72941 00:17:05.723 20:45:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:05.723 20:45:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:05.723 20:45:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:05.723 20:45:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:17:05.723 20:45:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:17:05.723 20:45:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:05.723 20:45:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:17:05.723 20:45:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:05.723 20:45:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:05.723 20:45:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:05.723 20:45:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:05.723 20:45:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:05.723 20:45:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:05.723 20:45:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:05.723 20:45:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:05.723 20:45:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:05.723 20:45:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:05.723 20:45:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:05.980 20:45:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:05.980 20:45:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:05.980 20:45:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:05.980 20:45:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:05.980 20:45:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:05.980 20:45:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:05.980 20:45:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:05.980 20:45:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:05.980 20:45:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@300 -- # return 0 00:17:05.980 20:45:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.vVAXMftipm /tmp/tmp.r6YkqMnUH5 /tmp/tmp.MAF01qfXiS 00:17:05.980 00:17:05.980 real 1m28.596s 00:17:05.980 user 2m19.817s 00:17:05.980 sys 0m30.770s 00:17:05.980 20:45:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:05.980 ************************************ 00:17:05.980 END TEST nvmf_tls 00:17:05.980 20:45:00 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:05.980 ************************************ 00:17:05.980 20:45:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:17:05.980 20:45:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:05.981 20:45:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:05.981 20:45:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:05.981 ************************************ 00:17:05.981 START TEST nvmf_fips 00:17:05.981 ************************************ 00:17:05.981 20:45:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:17:06.239 * Looking for test storage... 00:17:06.239 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:17:06.239 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:06.239 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lcov --version 00:17:06.239 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:06.239 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:06.239 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:06.239 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:06.239 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:06.239 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:17:06.239 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:17:06.239 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:17:06.239 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:17:06.239 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:17:06.239 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:17:06.239 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:17:06.239 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:06.239 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:17:06.239 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:17:06.239 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:06.239 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:06.239 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:17:06.239 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:17:06.239 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:06.239 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:17:06.239 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:17:06.239 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:17:06.239 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:17:06.239 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:06.239 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:17:06.239 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:17:06.239 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:06.239 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:06.239 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:17:06.239 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:06.239 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:06.239 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:06.239 --rc genhtml_branch_coverage=1 00:17:06.239 --rc genhtml_function_coverage=1 00:17:06.239 --rc genhtml_legend=1 00:17:06.239 --rc geninfo_all_blocks=1 00:17:06.239 --rc geninfo_unexecuted_blocks=1 00:17:06.239 00:17:06.239 ' 00:17:06.239 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:06.239 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:06.239 --rc genhtml_branch_coverage=1 00:17:06.239 --rc genhtml_function_coverage=1 00:17:06.239 --rc genhtml_legend=1 00:17:06.239 --rc geninfo_all_blocks=1 00:17:06.239 --rc geninfo_unexecuted_blocks=1 00:17:06.239 00:17:06.239 ' 00:17:06.239 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:06.239 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:06.239 --rc genhtml_branch_coverage=1 00:17:06.239 --rc genhtml_function_coverage=1 00:17:06.239 --rc genhtml_legend=1 00:17:06.239 --rc geninfo_all_blocks=1 00:17:06.239 --rc geninfo_unexecuted_blocks=1 00:17:06.239 00:17:06.239 ' 00:17:06.239 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:06.239 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:06.239 --rc genhtml_branch_coverage=1 00:17:06.239 --rc genhtml_function_coverage=1 00:17:06.239 --rc genhtml_legend=1 00:17:06.239 --rc geninfo_all_blocks=1 00:17:06.239 --rc geninfo_unexecuted_blocks=1 00:17:06.239 00:17:06.239 ' 00:17:06.239 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:06.239 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:17:06.239 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
00:17:06.239 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:06.239 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:06.239 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:06.240 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:06.240 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:06.240 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:06.240 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:06.240 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:06.240 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:06.240 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b 00:17:06.240 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=5b7a0101-ee75-44bd-b64f-b6a56d193f2b 00:17:06.240 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:06.240 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:06.240 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:06.240 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:06.240 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:06.240 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:17:06.240 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:06.240 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:06.240 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:06.240 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:06.240 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:06.240 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:06.240 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:17:06.240 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:06.240 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:17:06.240 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:06.240 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:06.240 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:06.240 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:06.240 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:06.240 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:06.240 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:06.240 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:06.240 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:06.240 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:06.240 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:06.240 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:17:06.240 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local 
target=3.0.0 00:17:06.240 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:17:06.240 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:17:06.240 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:17:06.240 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:17:06.240 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:06.240 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:06.240 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:17:06.240 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:17:06.240 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:17:06.240 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:17:06.240 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:17:06.240 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:17:06.240 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:17:06.240 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:06.240 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:17:06.240 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:17:06.240 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:06.240 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:06.240 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:17:06.240 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:17:06.240 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:17:06.240 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:17:06.240 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:17:06.240 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:17:06.240 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:17:06.240 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:17:06.240 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:17:06.240 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:17:06.240 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:06.240 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:06.240 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:17:06.240 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:06.240 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:17:06.240 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:17:06.240 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:06.240 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:17:06.240 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:17:06.240 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:17:06.241 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:17:06.241 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:17:06.241 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:17:06.241 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:17:06.241 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:06.241 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:17:06.241 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:17:06.241 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:17:06.241 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:17:06.241 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:17:06.241 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:17:06.241 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:17:06.241 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:17:06.241 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:17:06.241 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:17:06.241 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:17:06.241 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:17:06.241 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:17:06.241 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:17:06.241 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:17:06.241 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:17:06.241 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:17:06.498 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:17:06.498 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:17:06.498 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:17:06.498 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:17:06.498 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:17:06.498 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:17:06.498 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:17:06.498 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:17:06.498 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:06.498 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:17:06.498 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:06.498 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # type -P openssl 00:17:06.498 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:06.498 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:17:06.498 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:17:06.498 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:17:06.498 Error setting digest 00:17:06.498 40927F1BA57F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:17:06.498 40927F1BA57F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:17:06.498 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:17:06.498 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:06.498 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:06.498 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:06.498 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:17:06.498 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:06.498 
20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:06.498 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:06.498 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:06.498 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:06.498 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:06.498 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:06.498 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:06.498 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:17:06.498 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:17:06.498 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:17:06.498 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:17:06.498 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:17:06.498 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@460 -- # nvmf_veth_init 00:17:06.498 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:06.498 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:06.498 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:06.498 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:06.498 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:06.498 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:06.498 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:06.498 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:06.499 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:06.499 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:06.499 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:06.499 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:06.499 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:06.499 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:06.499 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:06.499 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:06.499 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:06.499 Cannot find device "nvmf_init_br" 00:17:06.499 20:45:01 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # true 00:17:06.499 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:06.499 Cannot find device "nvmf_init_br2" 00:17:06.499 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # true 00:17:06.499 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:06.499 Cannot find device "nvmf_tgt_br" 00:17:06.499 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@164 -- # true 00:17:06.499 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:17:06.499 Cannot find device "nvmf_tgt_br2" 00:17:06.499 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@165 -- # true 00:17:06.499 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:06.499 Cannot find device "nvmf_init_br" 00:17:06.499 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # true 00:17:06.499 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:06.499 Cannot find device "nvmf_init_br2" 00:17:06.499 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@167 -- # true 00:17:06.499 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:06.499 Cannot find device "nvmf_tgt_br" 00:17:06.499 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@168 -- # true 00:17:06.499 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:06.499 Cannot find device "nvmf_tgt_br2" 00:17:06.499 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # true 00:17:06.499 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:06.499 Cannot find device "nvmf_br" 00:17:06.499 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # true 00:17:06.499 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:06.499 Cannot find device "nvmf_init_if" 00:17:06.499 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # true 00:17:06.499 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:06.757 Cannot find device "nvmf_init_if2" 00:17:06.757 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # true 00:17:06.757 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:06.757 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:06.757 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@173 -- # true 00:17:06.757 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:06.757 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:06.757 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # true 00:17:06.757 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:06.757 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:06.757 20:45:01 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:06.757 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:06.757 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:06.757 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:06.757 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:06.757 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:06.757 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:06.757 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:06.757 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:06.757 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:06.757 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:06.757 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:06.757 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:06.757 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:06.757 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:06.757 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:06.757 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:06.757 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:06.757 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:06.757 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:06.757 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:06.757 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:07.015 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:07.015 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:07.015 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:07.015 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:07.015 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:07.015 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:07.015 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:07.015 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:07.015 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:07.015 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:07.015 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.088 ms 00:17:07.015 00:17:07.015 --- 10.0.0.3 ping statistics --- 00:17:07.015 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:07.015 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:17:07.015 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:07.015 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:17:07.015 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.086 ms 00:17:07.015 00:17:07.015 --- 10.0.0.4 ping statistics --- 00:17:07.015 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:07.015 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:17:07.015 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:07.015 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:07.015 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.055 ms 00:17:07.015 00:17:07.015 --- 10.0.0.1 ping statistics --- 00:17:07.015 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:07.015 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:17:07.015 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:07.015 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:07.015 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.088 ms 00:17:07.015 00:17:07.015 --- 10.0.0.2 ping statistics --- 00:17:07.015 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:07.015 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:17:07.015 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:07.015 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@461 -- # return 0 00:17:07.015 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:07.015 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:07.015 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:07.015 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:07.015 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:07.015 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:07.015 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:07.015 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:17:07.015 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:07.015 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:07.015 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:17:07.015 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=73296 00:17:07.015 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 73296 00:17:07.015 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 73296 ']' 00:17:07.015 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:07.015 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:07.015 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:07.015 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:07.015 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:07.015 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:07.015 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:17:07.015 [2024-11-26 20:45:01.964633] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
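The nvmf_veth_init trace above builds the test network: veth pairs for the initiator side (nvmf_init_if/nvmf_init_if2) and for the target side (nvmf_tgt_if/nvmf_tgt_if2, moved into the nvmf_tgt_ns_spdk namespace), with every host-side peer enslaved to the nvmf_br bridge and iptables rules opening TCP port 4420. The following is a condensed, hand-written sketch of that topology for reference only, not the SPDK helper itself; it assumes root privileges and stock iproute2/iptables, and it shows just one initiator/target pair.

  # target namespace and one veth pair per side (peer ends stay on the host)
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

  # addressing: initiator on 10.0.0.1, target on 10.0.0.3, same /24
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

  # bring everything up and bridge the host-side peers together
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br

  # allow NVMe/TCP traffic on port 4420 and bridge-local forwarding
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

  # sanity check: the host can reach the target address inside the namespace
  ping -c 1 10.0.0.3

The real helper creates the second pair of interfaces (10.0.0.2 and 10.0.0.4) the same way, which is why the trace pings all four addresses before launching nvmf_tgt inside the namespace.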
00:17:07.015 [2024-11-26 20:45:01.964767] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:07.274 [2024-11-26 20:45:02.133008] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:07.274 [2024-11-26 20:45:02.216480] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:07.274 [2024-11-26 20:45:02.216548] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:07.274 [2024-11-26 20:45:02.216571] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:07.274 [2024-11-26 20:45:02.216590] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:07.274 [2024-11-26 20:45:02.216606] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:07.274 [2024-11-26 20:45:02.217061] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:07.533 [2024-11-26 20:45:02.307400] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:07.533 20:45:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:07.533 20:45:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:17:07.533 20:45:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:07.533 20:45:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:07.533 20:45:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:17:07.533 20:45:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:07.533 20:45:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:17:07.533 20:45:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:17:07.533 20:45:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:17:07.533 20:45:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.oXb 00:17:07.533 20:45:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:17:07.533 20:45:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.oXb 00:17:07.533 20:45:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.oXb 00:17:07.533 20:45:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.oXb 00:17:07.533 20:45:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:07.792 [2024-11-26 20:45:02.750171] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:07.792 [2024-11-26 20:45:02.766099] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:07.792 [2024-11-26 20:45:02.766332] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:08.052 malloc0 00:17:08.052 20:45:02 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:08.052 20:45:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=73330 00:17:08.052 20:45:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 73330 /var/tmp/bdevperf.sock 00:17:08.052 20:45:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 73330 ']' 00:17:08.052 20:45:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:08.052 20:45:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:08.052 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:08.052 20:45:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:08.052 20:45:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:08.052 20:45:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:17:08.052 20:45:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:08.052 [2024-11-26 20:45:02.931462] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:17:08.052 [2024-11-26 20:45:02.931578] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73330 ] 00:17:08.312 [2024-11-26 20:45:03.095376] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:08.312 [2024-11-26 20:45:03.159112] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:08.312 [2024-11-26 20:45:03.209108] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:09.247 20:45:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:09.247 20:45:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:17:09.247 20:45:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.oXb 00:17:09.247 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:17:09.507 [2024-11-26 20:45:04.262086] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:09.507 TLSTESTn1 00:17:09.507 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:09.507 Running I/O for 10 seconds... 
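For reference, the TLS path exercised here can be replayed with the same RPCs the trace shows, assuming bdevperf has been started as above (-z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10) and the nvmf target from the earlier trace is still listening on 10.0.0.3:4420. This is a minimal sketch of those steps, not a substitute for fips.sh; the /tmp/spdk-psk.oXb name comes from mktemp and will differ between runs.

  # PSK in NVMe-oF interchange format, written by the test above, mode 0600
  printf '%s' 'NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:' > /tmp/spdk-psk.oXb
  chmod 0600 /tmp/spdk-psk.oXb

  # register the key with the bdevperf application, then attach over TLS
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
      keyring_file_add_key key0 /tmp/spdk-psk.oXb
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
      bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0

  # kick off the queued verify workload (10 s, queue depth 128, 4 KiB IOs)
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

In the per-second samples and the summary that follow, the MiB/s column is simply IOPS times the 4096-byte IO size, e.g. 5068.97 IOPS x 4 KiB is roughly 19.80 MiB/s.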
00:17:11.451 5028.00 IOPS, 19.64 MiB/s [2024-11-26T20:45:07.818Z] 4896.00 IOPS, 19.12 MiB/s [2024-11-26T20:45:08.753Z] 4808.33 IOPS, 18.78 MiB/s [2024-11-26T20:45:09.783Z] 4798.50 IOPS, 18.74 MiB/s [2024-11-26T20:45:10.720Z] 4908.00 IOPS, 19.17 MiB/s [2024-11-26T20:45:11.658Z] 4977.50 IOPS, 19.44 MiB/s [2024-11-26T20:45:12.596Z] 5058.71 IOPS, 19.76 MiB/s [2024-11-26T20:45:13.534Z] 5123.38 IOPS, 20.01 MiB/s [2024-11-26T20:45:14.469Z] 5117.56 IOPS, 19.99 MiB/s [2024-11-26T20:45:14.469Z] 5064.50 IOPS, 19.78 MiB/s 00:17:19.476 Latency(us) 00:17:19.476 [2024-11-26T20:45:14.469Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:19.476 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:19.476 Verification LBA range: start 0x0 length 0x2000 00:17:19.476 TLSTESTn1 : 10.02 5068.97 19.80 0.00 0.00 25208.57 5492.54 27088.21 00:17:19.476 [2024-11-26T20:45:14.469Z] =================================================================================================================== 00:17:19.476 [2024-11-26T20:45:14.469Z] Total : 5068.97 19.80 0.00 0.00 25208.57 5492.54 27088.21 00:17:19.476 { 00:17:19.476 "results": [ 00:17:19.476 { 00:17:19.476 "job": "TLSTESTn1", 00:17:19.476 "core_mask": "0x4", 00:17:19.476 "workload": "verify", 00:17:19.476 "status": "finished", 00:17:19.476 "verify_range": { 00:17:19.476 "start": 0, 00:17:19.476 "length": 8192 00:17:19.476 }, 00:17:19.476 "queue_depth": 128, 00:17:19.476 "io_size": 4096, 00:17:19.476 "runtime": 10.015642, 00:17:19.476 "iops": 5068.9711153813205, 00:17:19.476 "mibps": 19.800668419458283, 00:17:19.476 "io_failed": 0, 00:17:19.476 "io_timeout": 0, 00:17:19.476 "avg_latency_us": 25208.56575686888, 00:17:19.476 "min_latency_us": 5492.540952380952, 00:17:19.476 "max_latency_us": 27088.213333333333 00:17:19.476 } 00:17:19.476 ], 00:17:19.476 "core_count": 1 00:17:19.476 } 00:17:19.735 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:17:19.735 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:17:19.735 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:17:19.735 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:17:19.735 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:17:19.735 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:17:19.735 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:17:19.735 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:17:19.735 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:17:19.735 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:17:19.735 nvmf_trace.0 00:17:19.735 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:17:19.735 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 73330 00:17:19.735 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 73330 ']' 00:17:19.735 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 
73330 00:17:19.735 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:17:19.736 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:19.736 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73330 00:17:19.736 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:17:19.736 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:17:19.736 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73330' 00:17:19.736 killing process with pid 73330 00:17:19.736 Received shutdown signal, test time was about 10.000000 seconds 00:17:19.736 00:17:19.736 Latency(us) 00:17:19.736 [2024-11-26T20:45:14.729Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:19.736 [2024-11-26T20:45:14.729Z] =================================================================================================================== 00:17:19.736 [2024-11-26T20:45:14.729Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:19.736 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 73330 00:17:19.736 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 73330 00:17:19.994 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:17:19.994 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:19.994 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:17:19.994 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:19.994 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:17:19.994 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:19.994 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:19.994 rmmod nvme_tcp 00:17:19.994 rmmod nvme_fabrics 00:17:19.994 rmmod nvme_keyring 00:17:19.994 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:19.994 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:17:19.994 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:17:19.994 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 73296 ']' 00:17:19.994 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 73296 00:17:19.994 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 73296 ']' 00:17:19.994 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 73296 00:17:19.994 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:17:19.994 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:19.994 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73296 00:17:20.253 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:20.253 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' 
reactor_1 = sudo ']' 00:17:20.253 killing process with pid 73296 00:17:20.253 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73296' 00:17:20.253 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 73296 00:17:20.253 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 73296 00:17:20.511 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:20.511 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:20.511 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:20.511 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:17:20.511 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:17:20.511 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:17:20.511 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:20.511 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:20.511 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:20.511 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:20.512 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:20.512 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:20.512 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:20.512 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:20.512 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:20.512 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:20.512 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:20.512 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:20.512 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:20.512 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:20.512 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:20.770 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:20.770 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:20.770 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:20.770 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:20.770 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:20.770 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@300 -- # return 0 00:17:20.770 20:45:15 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.oXb 00:17:20.770 00:17:20.770 real 0m14.675s 00:17:20.770 user 0m19.472s 00:17:20.770 sys 0m6.280s 00:17:20.770 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:20.770 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:17:20.770 ************************************ 00:17:20.770 END TEST nvmf_fips 00:17:20.770 ************************************ 00:17:20.770 20:45:15 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:17:20.770 20:45:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:20.770 20:45:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:20.770 20:45:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:20.770 ************************************ 00:17:20.770 START TEST nvmf_control_msg_list 00:17:20.770 ************************************ 00:17:20.770 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:17:20.770 * Looking for test storage... 00:17:20.770 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:20.770 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:20.770 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:20.770 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lcov --version 00:17:21.031 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:21.031 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:21.031 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:21.031 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:21.031 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:17:21.031 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:17:21.031 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:17:21.031 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:17:21.031 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:17:21.031 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:17:21.031 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:17:21.031 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:21.031 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:17:21.031 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:17:21.031 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:17:21.031 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:21.031 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:17:21.031 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:17:21.031 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:21.031 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:17:21.031 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:17:21.031 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:17:21.031 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:17:21.031 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:21.031 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:17:21.031 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:17:21.031 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:21.031 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:21.031 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:17:21.031 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:21.031 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:21.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:21.031 --rc genhtml_branch_coverage=1 00:17:21.031 --rc genhtml_function_coverage=1 00:17:21.031 --rc genhtml_legend=1 00:17:21.031 --rc geninfo_all_blocks=1 00:17:21.031 --rc geninfo_unexecuted_blocks=1 00:17:21.031 00:17:21.031 ' 00:17:21.031 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:21.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:21.031 --rc genhtml_branch_coverage=1 00:17:21.031 --rc genhtml_function_coverage=1 00:17:21.031 --rc genhtml_legend=1 00:17:21.031 --rc geninfo_all_blocks=1 00:17:21.031 --rc geninfo_unexecuted_blocks=1 00:17:21.031 00:17:21.031 ' 00:17:21.031 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:21.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:21.031 --rc genhtml_branch_coverage=1 00:17:21.031 --rc genhtml_function_coverage=1 00:17:21.031 --rc genhtml_legend=1 00:17:21.031 --rc geninfo_all_blocks=1 00:17:21.031 --rc geninfo_unexecuted_blocks=1 00:17:21.031 00:17:21.031 ' 00:17:21.031 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:21.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:21.031 --rc genhtml_branch_coverage=1 00:17:21.031 --rc genhtml_function_coverage=1 00:17:21.031 --rc genhtml_legend=1 00:17:21.031 --rc geninfo_all_blocks=1 00:17:21.031 --rc 
geninfo_unexecuted_blocks=1 00:17:21.031 00:17:21.031 ' 00:17:21.031 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:21.031 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:17:21.031 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:21.031 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:21.031 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:21.031 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:21.031 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:21.031 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:21.031 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:21.031 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:21.031 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:21.031 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:21.031 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b 00:17:21.031 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=5b7a0101-ee75-44bd-b64f-b6a56d193f2b 00:17:21.031 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:21.031 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:21.031 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:21.031 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:21.031 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:21.031 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:17:21.031 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:21.031 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:21.031 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:21.031 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:21.031 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:21.031 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:21.031 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:17:21.031 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:21.031 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:17:21.031 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:21.031 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:21.031 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:21.031 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:21.031 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:21.032 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:21.032 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:21.032 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:21.032 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:21.032 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:21.032 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:17:21.032 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:21.032 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:21.032 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:21.032 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:21.032 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:21.032 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:21.032 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:21.032 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:21.032 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:17:21.032 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:17:21.032 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:17:21.032 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:17:21.032 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:17:21.032 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@460 -- # nvmf_veth_init 00:17:21.032 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:21.032 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:21.032 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:21.032 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:21.032 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:21.032 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:21.032 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:21.032 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:21.032 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:21.032 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:21.032 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:21.032 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:21.032 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:21.032 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:21.032 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:21.032 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:21.032 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:21.032 Cannot find device "nvmf_init_br" 00:17:21.032 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # true 00:17:21.032 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:21.032 Cannot find device "nvmf_init_br2" 00:17:21.032 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # true 00:17:21.032 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:21.032 Cannot find device "nvmf_tgt_br" 00:17:21.032 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@164 -- # true 00:17:21.032 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:17:21.032 Cannot find device "nvmf_tgt_br2" 00:17:21.032 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@165 -- # true 00:17:21.032 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:21.032 Cannot find device "nvmf_init_br" 00:17:21.032 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@166 -- # true 00:17:21.032 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:21.032 Cannot find device "nvmf_init_br2" 00:17:21.032 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@167 -- # true 00:17:21.032 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:21.032 Cannot find device "nvmf_tgt_br" 00:17:21.032 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@168 -- # true 00:17:21.032 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:21.032 Cannot find device "nvmf_tgt_br2" 00:17:21.032 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # true 00:17:21.032 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:21.032 Cannot find device "nvmf_br" 00:17:21.032 20:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@170 -- # true 00:17:21.032 20:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:21.032 Cannot find 
device "nvmf_init_if" 00:17:21.032 20:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # true 00:17:21.032 20:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:21.290 Cannot find device "nvmf_init_if2" 00:17:21.290 20:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # true 00:17:21.290 20:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:21.290 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:21.290 20:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@173 -- # true 00:17:21.290 20:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:21.290 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:21.290 20:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@174 -- # true 00:17:21.290 20:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:21.290 20:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:21.290 20:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:21.291 20:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:21.291 20:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:21.291 20:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:21.291 20:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:21.291 20:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:21.291 20:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:21.291 20:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:21.291 20:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:21.291 20:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:21.291 20:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:21.291 20:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:21.291 20:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:21.291 20:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:21.291 20:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:21.291 20:45:16 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:21.291 20:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:21.291 20:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:21.291 20:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:21.291 20:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:21.291 20:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:21.291 20:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:21.291 20:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:21.291 20:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:21.291 20:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:21.291 20:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:21.291 20:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:21.291 20:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:21.291 20:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:21.291 20:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:21.291 20:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:21.291 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:21.291 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.084 ms 00:17:21.291 00:17:21.291 --- 10.0.0.3 ping statistics --- 00:17:21.291 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:21.291 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:17:21.291 20:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:21.291 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:17:21.291 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.049 ms 00:17:21.291 00:17:21.291 --- 10.0.0.4 ping statistics --- 00:17:21.291 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:21.291 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:17:21.291 20:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:21.291 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:21.291 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.040 ms 00:17:21.291 00:17:21.291 --- 10.0.0.1 ping statistics --- 00:17:21.291 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:21.291 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:17:21.291 20:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:21.291 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:21.291 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.068 ms 00:17:21.291 00:17:21.291 --- 10.0.0.2 ping statistics --- 00:17:21.291 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:21.291 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:17:21.291 20:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:21.291 20:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@461 -- # return 0 00:17:21.549 20:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:21.549 20:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:21.549 20:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:21.549 20:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:21.549 20:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:21.549 20:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:21.549 20:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:21.549 20:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:17:21.549 20:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:21.549 20:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:21.549 20:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:17:21.549 20:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=73721 00:17:21.549 20:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:17:21.549 20:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 73721 00:17:21.549 20:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 73721 ']' 00:17:21.550 20:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:21.550 20:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:21.550 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:21.550 20:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
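The nvmf_veth_init sequence logged above (namespace, veth pairs, bridge, addresses, firewall openings, ping checks) reduces to roughly the sketch below. Interface, namespace, and address names are the ones visible in this log; this is a simplified reproduction for reference, not the SPDK common.sh helper itself, and the second veth pair (nvmf_init_if2/nvmf_tgt_if2) is omitted for brevity.

# Minimal sketch of the topology nvmf_veth_init builds (run as root; one pair per side).
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br        # initiator-side pair stays on the host
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br          # target-side pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                   # target end moves into the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge                                   # bridge the host-side peer ends together
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF: allow NVMe/TCP'
ping -c 1 10.0.0.3                                                # host reaches the namespaced target address

The SPDK_NVMF comment tag is what the matching teardown keys on: nvmftestfini (visible near the end of the control_msg_list run below) restores the firewall with iptables-save | grep -v SPDK_NVMF | iptables-restore, then deletes the bridge, the veth interfaces, and the namespace.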
00:17:21.550 20:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:21.550 20:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:17:21.550 [2024-11-26 20:45:16.375359] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:17:21.550 [2024-11-26 20:45:16.375463] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:21.550 [2024-11-26 20:45:16.533126] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:21.808 [2024-11-26 20:45:16.606205] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:21.808 [2024-11-26 20:45:16.606272] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:21.808 [2024-11-26 20:45:16.606287] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:21.808 [2024-11-26 20:45:16.606302] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:21.808 [2024-11-26 20:45:16.606313] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:21.808 [2024-11-26 20:45:16.606760] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:21.808 [2024-11-26 20:45:16.689944] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:21.808 20:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:21.808 20:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:17:21.808 20:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:21.808 20:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:21.808 20:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:17:22.066 20:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:22.066 20:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:17:22.066 20:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:17:22.066 20:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:17:22.066 20:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.066 20:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:17:22.066 [2024-11-26 20:45:16.842558] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:22.066 20:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.066 20:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd 
nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:17:22.066 20:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.066 20:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:17:22.066 20:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.066 20:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:17:22.066 20:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.066 20:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:17:22.066 Malloc0 00:17:22.066 20:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.066 20:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:17:22.066 20:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.066 20:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:17:22.066 20:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.066 20:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:17:22.066 20:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.066 20:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:17:22.066 [2024-11-26 20:45:16.885453] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:22.066 20:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.066 20:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=73740 00:17:22.066 20:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:17:22.066 20:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=73741 00:17:22.066 20:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:17:22.066 20:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=73742 00:17:22.066 20:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 73740 00:17:22.066 20:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:17:22.326 [2024-11-26 20:45:17.090285] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: 
Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:17:22.326 [2024-11-26 20:45:17.090512] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:17:22.326 [2024-11-26 20:45:17.100224] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:17:23.260 Initializing NVMe Controllers 00:17:23.260 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:17:23.260 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:17:23.260 Initialization complete. Launching workers. 00:17:23.260 ======================================================== 00:17:23.260 Latency(us) 00:17:23.260 Device Information : IOPS MiB/s Average min max 00:17:23.260 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 3603.90 14.08 277.21 166.29 895.52 00:17:23.260 ======================================================== 00:17:23.260 Total : 3603.90 14.08 277.21 166.29 895.52 00:17:23.260 00:17:23.260 Initializing NVMe Controllers 00:17:23.260 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:17:23.260 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:17:23.260 Initialization complete. Launching workers. 00:17:23.260 ======================================================== 00:17:23.260 Latency(us) 00:17:23.260 Device Information : IOPS MiB/s Average min max 00:17:23.260 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 3604.00 14.08 277.19 180.00 894.98 00:17:23.260 ======================================================== 00:17:23.260 Total : 3604.00 14.08 277.19 180.00 894.98 00:17:23.260 00:17:23.260 Initializing NVMe Controllers 00:17:23.260 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:17:23.260 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:17:23.260 Initialization complete. Launching workers. 
00:17:23.260 ======================================================== 00:17:23.260 Latency(us) 00:17:23.260 Device Information : IOPS MiB/s Average min max 00:17:23.260 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 3662.97 14.31 272.71 85.66 892.28 00:17:23.260 ======================================================== 00:17:23.260 Total : 3662.97 14.31 272.71 85.66 892.28 00:17:23.260 00:17:23.260 20:45:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 73741 00:17:23.260 20:45:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 73742 00:17:23.260 20:45:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:17:23.260 20:45:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:17:23.260 20:45:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:23.260 20:45:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:17:23.260 20:45:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:23.260 20:45:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:17:23.260 20:45:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:23.260 20:45:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:23.260 rmmod nvme_tcp 00:17:23.260 rmmod nvme_fabrics 00:17:23.260 rmmod nvme_keyring 00:17:23.260 20:45:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:23.260 20:45:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:17:23.260 20:45:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:17:23.260 20:45:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' -n 73721 ']' 00:17:23.260 20:45:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 73721 00:17:23.260 20:45:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 73721 ']' 00:17:23.260 20:45:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 73721 00:17:23.260 20:45:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:17:23.260 20:45:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:23.260 20:45:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73721 00:17:23.520 killing process with pid 73721 00:17:23.520 20:45:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:23.520 20:45:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:23.520 20:45:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73721' 00:17:23.520 20:45:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 73721 00:17:23.520 20:45:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@978 -- # wait 73721 00:17:23.778 20:45:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:23.778 20:45:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:23.778 20:45:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:23.778 20:45:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:17:23.778 20:45:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:17:23.778 20:45:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:17:23.778 20:45:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:23.778 20:45:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:23.778 20:45:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:23.778 20:45:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:23.778 20:45:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:23.778 20:45:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:23.778 20:45:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:23.778 20:45:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:23.778 20:45:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:23.778 20:45:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:23.778 20:45:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:23.778 20:45:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:23.778 20:45:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:23.778 20:45:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:23.778 20:45:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:23.778 20:45:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:24.037 20:45:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:24.037 20:45:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:24.037 20:45:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:24.037 20:45:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:24.037 20:45:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@300 -- # return 0 00:17:24.037 00:17:24.037 real 0m3.190s 00:17:24.037 user 0m4.722s 00:17:24.037 
sys 0m1.788s 00:17:24.037 20:45:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:24.037 20:45:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:17:24.037 ************************************ 00:17:24.037 END TEST nvmf_control_msg_list 00:17:24.037 ************************************ 00:17:24.037 20:45:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:17:24.037 20:45:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:24.037 20:45:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:24.037 20:45:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:24.037 ************************************ 00:17:24.037 START TEST nvmf_wait_for_buf 00:17:24.037 ************************************ 00:17:24.037 20:45:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:17:24.037 * Looking for test storage... 00:17:24.037 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:24.037 20:45:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:24.037 20:45:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lcov --version 00:17:24.037 20:45:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:24.296 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:24.296 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:24.296 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:24.296 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:24.296 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:17:24.296 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:17:24.296 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:17:24.296 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:17:24.296 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:17:24.296 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:17:24.296 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:17:24.296 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:24.296 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:17:24.296 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:17:24.296 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:24.296 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:24.296 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:17:24.296 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:17:24.296 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:24.296 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:17:24.296 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:17:24.296 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:17:24.296 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:17:24.296 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:24.296 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:17:24.296 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:17:24.296 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:24.296 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:24.296 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:17:24.296 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:24.296 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:24.296 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:24.296 --rc genhtml_branch_coverage=1 00:17:24.296 --rc genhtml_function_coverage=1 00:17:24.296 --rc genhtml_legend=1 00:17:24.296 --rc geninfo_all_blocks=1 00:17:24.296 --rc geninfo_unexecuted_blocks=1 00:17:24.296 00:17:24.296 ' 00:17:24.296 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:24.296 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:24.296 --rc genhtml_branch_coverage=1 00:17:24.296 --rc genhtml_function_coverage=1 00:17:24.296 --rc genhtml_legend=1 00:17:24.296 --rc geninfo_all_blocks=1 00:17:24.296 --rc geninfo_unexecuted_blocks=1 00:17:24.296 00:17:24.296 ' 00:17:24.296 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:24.296 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:24.296 --rc genhtml_branch_coverage=1 00:17:24.296 --rc genhtml_function_coverage=1 00:17:24.296 --rc genhtml_legend=1 00:17:24.296 --rc geninfo_all_blocks=1 00:17:24.296 --rc geninfo_unexecuted_blocks=1 00:17:24.296 00:17:24.296 ' 00:17:24.296 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:24.296 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:24.296 --rc genhtml_branch_coverage=1 00:17:24.296 --rc genhtml_function_coverage=1 00:17:24.296 --rc genhtml_legend=1 00:17:24.296 --rc geninfo_all_blocks=1 00:17:24.296 --rc geninfo_unexecuted_blocks=1 00:17:24.296 00:17:24.296 ' 00:17:24.296 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:24.296 20:45:19 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:17:24.296 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:24.296 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:24.296 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:24.296 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:24.296 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:24.296 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:24.296 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:24.296 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:24.296 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:24.296 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:24.296 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b 00:17:24.296 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b7a0101-ee75-44bd-b64f-b6a56d193f2b 00:17:24.296 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:24.297 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:24.297 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:24.297 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:24.297 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:24.297 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:17:24.297 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:24.297 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:24.297 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:24.297 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:24.297 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:24.297 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:24.297 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:17:24.297 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:24.297 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:17:24.297 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:24.297 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:24.297 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:24.297 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:24.297 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:24.297 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:24.297 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:24.297 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:24.297 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:24.297 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:24.297 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:17:24.297 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 
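The "/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected" message above (it also appeared during the control_msg_list run) is benign: build_nvmf_app_args performs a numeric test against a value that is empty in this configuration, so the [ builtin complains and the condition simply evaluates false. A minimal reproduction of the warning, plus a hypothetical defensive form (not the SPDK source):

# Reproduction: an empty string in a numeric test triggers the warning and the branch is skipped.
VAR=''
if [ "$VAR" -eq 1 ]; then :; fi        # -> "[: : integer expression expected", exit status 2, treated as false
# Hypothetical guard: default the empty value to 0 so the test is quietly false.
if [ "${VAR:-0}" -eq 1 ]; then :; fi   # no warning, branch still skipped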
00:17:24.297 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:24.297 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:24.297 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:24.297 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:24.297 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:24.297 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:24.297 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:24.297 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:17:24.297 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:17:24.297 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:17:24.297 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:17:24.297 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:17:24.297 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@460 -- # nvmf_veth_init 00:17:24.297 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:24.297 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:24.297 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:24.297 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:24.297 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:24.297 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:24.297 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:24.297 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:24.297 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:24.297 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:24.297 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:24.297 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:24.297 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:24.297 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:24.297 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:24.297 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:24.297 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:24.297 Cannot find device "nvmf_init_br" 00:17:24.297 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # true 00:17:24.297 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:24.297 Cannot find device "nvmf_init_br2" 00:17:24.297 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # true 00:17:24.297 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:24.297 Cannot find device "nvmf_tgt_br" 00:17:24.297 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@164 -- # true 00:17:24.297 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:17:24.297 Cannot find device "nvmf_tgt_br2" 00:17:24.297 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@165 -- # true 00:17:24.297 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:24.297 Cannot find device "nvmf_init_br" 00:17:24.297 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@166 -- # true 00:17:24.297 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:24.297 Cannot find device "nvmf_init_br2" 00:17:24.297 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@167 -- # true 00:17:24.297 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:24.298 Cannot find device "nvmf_tgt_br" 00:17:24.298 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@168 -- # true 00:17:24.298 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:24.298 Cannot find device "nvmf_tgt_br2" 00:17:24.298 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # true 00:17:24.298 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:24.298 Cannot find device "nvmf_br" 00:17:24.298 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@170 -- # true 00:17:24.298 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:24.556 Cannot find device "nvmf_init_if" 00:17:24.556 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # true 00:17:24.556 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:24.556 Cannot find device "nvmf_init_if2" 00:17:24.556 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # true 00:17:24.556 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:24.556 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:24.556 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@173 -- # true 00:17:24.556 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:24.556 Cannot 
open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:24.556 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@174 -- # true 00:17:24.556 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:24.556 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:24.556 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:24.556 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:24.556 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:24.556 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:24.556 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:24.556 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:24.556 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:24.556 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:24.556 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:24.556 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:24.556 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:24.556 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:24.556 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:24.556 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:24.556 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:24.556 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:24.556 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:24.556 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:24.556 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:24.556 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:24.556 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:24.814 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:24.814 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:24.814 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:24.814 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:24.814 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:24.814 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:24.814 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:24.814 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:24.814 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:24.814 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:24.814 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:24.814 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.071 ms 00:17:24.814 00:17:24.814 --- 10.0.0.3 ping statistics --- 00:17:24.814 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:24.814 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:17:24.814 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:24.814 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:17:24.814 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.045 ms 00:17:24.814 00:17:24.814 --- 10.0.0.4 ping statistics --- 00:17:24.814 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:24.814 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:17:24.814 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:24.814 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:24.814 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:17:24.814 00:17:24.814 --- 10.0.0.1 ping statistics --- 00:17:24.814 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:24.814 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:17:24.814 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:24.814 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:24.814 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.049 ms 00:17:24.814 00:17:24.814 --- 10.0.0.2 ping statistics --- 00:17:24.814 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:24.814 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:17:24.814 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:24.814 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@461 -- # return 0 00:17:24.814 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:24.814 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:24.814 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:24.814 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:24.814 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:24.814 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:24.814 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:24.814 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:17:24.814 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:24.814 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:24.814 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:17:24.814 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=73989 00:17:24.814 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 73989 00:17:24.814 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:17:24.814 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 73989 ']' 00:17:24.814 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:24.814 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:24.814 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:24.814 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:24.814 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:24.814 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:17:24.814 [2024-11-26 20:45:19.723500] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
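The nvmf_veth_init sequence traced above reduces to the sketch below: two initiator veths stay in the root namespace (10.0.0.1 and 10.0.0.2), two target veths move into nvmf_tgt_ns_spdk (10.0.0.3 and 10.0.0.4), all bridge-side peers are enslaved to nvmf_br, TCP port 4420 is opened, and connectivity is verified with single pings in both directions. Interface names, addresses and rules are taken from the trace; the error guards and the SPDK_NVMF rule tagging are left out here to keep the sketch short.

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" up                      # host-side links
done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" master nvmf_br          # tie both sides of the veth pairs together
done
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.3 && ping -c 1 10.0.0.4       # root namespace reaches the target addresses
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2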
00:17:24.814 [2024-11-26 20:45:19.723602] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:25.071 [2024-11-26 20:45:19.883019] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:25.072 [2024-11-26 20:45:19.956601] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:25.072 [2024-11-26 20:45:19.956676] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:25.072 [2024-11-26 20:45:19.956692] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:25.072 [2024-11-26 20:45:19.956706] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:25.072 [2024-11-26 20:45:19.956717] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:25.072 [2024-11-26 20:45:19.957101] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:26.008 20:45:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:26.009 20:45:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:17:26.009 20:45:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:26.009 20:45:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:26.009 20:45:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:17:26.009 20:45:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:26.009 20:45:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:17:26.009 20:45:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:17:26.009 20:45:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:17:26.009 20:45:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.009 20:45:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:17:26.009 20:45:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.009 20:45:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:17:26.009 20:45:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.009 20:45:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:17:26.009 20:45:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.009 20:45:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:17:26.009 20:45:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.009 20:45:20 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:17:26.009 [2024-11-26 20:45:20.887223] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:26.009 20:45:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.009 20:45:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:17:26.009 20:45:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.009 20:45:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:17:26.009 Malloc0 00:17:26.009 20:45:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.009 20:45:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:17:26.009 20:45:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.009 20:45:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:17:26.009 [2024-11-26 20:45:20.978895] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:26.009 20:45:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.009 20:45:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:17:26.009 20:45:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.009 20:45:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:17:26.009 20:45:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.009 20:45:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:17:26.009 20:45:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.009 20:45:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:17:26.009 20:45:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.009 20:45:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:17:26.268 20:45:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.268 20:45:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:17:26.268 [2024-11-26 20:45:21.002999] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:26.268 20:45:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.268 20:45:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:17:26.268 [2024-11-26 20:45:21.212306] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: 
Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:17:27.643 Initializing NVMe Controllers 00:17:27.643 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:17:27.643 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:17:27.643 Initialization complete. Launching workers. 00:17:27.643 ======================================================== 00:17:27.643 Latency(us) 00:17:27.643 Device Information : IOPS MiB/s Average min max 00:17:27.643 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 495.03 61.88 8080.40 5991.58 14006.64 00:17:27.643 ======================================================== 00:17:27.643 Total : 495.03 61.88 8080.40 5991.58 14006.64 00:17:27.643 00:17:27.643 20:45:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:17:27.643 20:45:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:17:27.643 20:45:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.643 20:45:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:17:27.643 20:45:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.643 20:45:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=4712 00:17:27.643 20:45:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 4712 -eq 0 ]] 00:17:27.643 20:45:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:17:27.643 20:45:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:17:27.643 20:45:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:27.643 20:45:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:17:27.643 20:45:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:27.643 20:45:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:17:27.643 20:45:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:27.643 20:45:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:27.643 rmmod nvme_tcp 00:17:27.643 rmmod nvme_fabrics 00:17:27.902 rmmod nvme_keyring 00:17:27.902 20:45:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:27.902 20:45:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:17:27.902 20:45:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:17:27.902 20:45:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 73989 ']' 00:17:27.902 20:45:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 73989 00:17:27.902 20:45:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 73989 ']' 00:17:27.902 20:45:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- 
# kill -0 73989 00:17:27.902 20:45:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # uname 00:17:27.902 20:45:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:27.902 20:45:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73989 00:17:27.902 20:45:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:27.902 20:45:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:27.902 20:45:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73989' 00:17:27.902 killing process with pid 73989 00:17:27.902 20:45:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 73989 00:17:27.902 20:45:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 73989 00:17:28.160 20:45:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:28.160 20:45:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:28.160 20:45:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:28.160 20:45:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:17:28.160 20:45:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:28.160 20:45:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:17:28.161 20:45:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:17:28.161 20:45:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:28.161 20:45:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:28.161 20:45:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:28.161 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:28.161 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:28.161 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:28.161 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:28.161 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:28.161 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:28.161 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:28.161 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:28.161 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:28.161 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:28.420 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:28.420 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:28.420 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:28.420 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:28.420 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:28.420 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:28.420 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@300 -- # return 0 00:17:28.420 00:17:28.420 real 0m4.369s 00:17:28.420 user 0m3.604s 00:17:28.420 sys 0m1.179s 00:17:28.420 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:28.420 ************************************ 00:17:28.420 END TEST nvmf_wait_for_buf 00:17:28.420 ************************************ 00:17:28.420 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:17:28.420 20:45:23 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:17:28.420 20:45:23 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ virt == phy ]] 00:17:28.420 20:45:23 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /home/vagrant/spdk_repo/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:17:28.420 20:45:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:28.420 20:45:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:28.420 20:45:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:28.420 ************************************ 00:17:28.420 START TEST nvmf_nsid 00:17:28.420 ************************************ 00:17:28.420 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:17:28.680 * Looking for test storage... 
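Condensed, the wait_for_buf scenario that just finished does the following: the target is started with --wait-for-rpc, the iobuf small pool is deliberately kept tiny, a malloc namespace is exported over TCP on 10.0.0.3:4420, a short spdk_nvme_perf run pushes 128 KiB reads through it, and the test passes only if the nvmf_TCP module had to retry small-buffer allocations (4712 retries here). A sketch using the values from the trace; rpc_cmd is the suite's wrapper around scripts/rpc.py:

rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0
rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192   # starve the small pool
rpc_cmd framework_start_init
rpc_cmd bdev_malloc_create -b Malloc0 32 512
rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -n 24 -b 24
rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001
rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420

/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420'

retry_count=$(rpc_cmd iobuf_get_stats |
    jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry')
if [[ $retry_count -eq 0 ]]; then
    echo "expected small-pool retries while waiting for buffers, got none" >&2
    exit 1
fi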
00:17:28.680 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:28.680 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:28.680 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:28.680 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lcov --version 00:17:28.680 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:28.680 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:28.680 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:28.680 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:28.680 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:17:28.680 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:17:28.680 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:17:28.680 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:17:28.680 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:17:28.680 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:17:28.680 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:17:28.680 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:28.680 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:17:28.680 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:17:28.680 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:28.680 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:28.680 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:17:28.680 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:17:28.680 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:28.680 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:17:28.680 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:17:28.680 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:17:28.680 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:17:28.680 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:28.680 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:17:28.680 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:17:28.680 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:28.680 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:28.680 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:17:28.680 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:28.680 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:28.680 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:28.680 --rc genhtml_branch_coverage=1 00:17:28.680 --rc genhtml_function_coverage=1 00:17:28.680 --rc genhtml_legend=1 00:17:28.680 --rc geninfo_all_blocks=1 00:17:28.680 --rc geninfo_unexecuted_blocks=1 00:17:28.680 00:17:28.680 ' 00:17:28.680 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:28.680 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:28.680 --rc genhtml_branch_coverage=1 00:17:28.680 --rc genhtml_function_coverage=1 00:17:28.680 --rc genhtml_legend=1 00:17:28.680 --rc geninfo_all_blocks=1 00:17:28.680 --rc geninfo_unexecuted_blocks=1 00:17:28.680 00:17:28.680 ' 00:17:28.680 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:28.680 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:28.680 --rc genhtml_branch_coverage=1 00:17:28.680 --rc genhtml_function_coverage=1 00:17:28.680 --rc genhtml_legend=1 00:17:28.680 --rc geninfo_all_blocks=1 00:17:28.680 --rc geninfo_unexecuted_blocks=1 00:17:28.680 00:17:28.680 ' 00:17:28.680 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:28.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:28.681 --rc genhtml_branch_coverage=1 00:17:28.681 --rc genhtml_function_coverage=1 00:17:28.681 --rc genhtml_legend=1 00:17:28.681 --rc geninfo_all_blocks=1 00:17:28.681 --rc geninfo_unexecuted_blocks=1 00:17:28.681 00:17:28.681 ' 00:17:28.681 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:28.681 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:17:28.681 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
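The scripts/common.sh trace above (lt 1.15 2, cmp_versions, decimal) is a field-by-field numeric version comparison, used here to select the lcov 1.x style coverage options when the installed lcov is older than 2. A simplified sketch of the same logic, limited to the "<" case this run exercises:

version_lt() {
    local IFS=.-: i
    local -a a b
    read -ra a <<< "$1"
    read -ra b <<< "$2"
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        local x=${a[i]:-0} y=${b[i]:-0}
        ((x > y)) && return 1
        ((x < y)) && return 0
    done
    return 1   # equal versions are not less-than
}

version_lt 1.15 2 && lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'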
00:17:28.681 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:28.681 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:28.681 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:28.681 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:28.681 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:28.681 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:28.681 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:28.681 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:28.681 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:28.681 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b 00:17:28.681 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=5b7a0101-ee75-44bd-b64f-b6a56d193f2b 00:17:28.681 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:28.681 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:28.681 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:28.681 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:28.681 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:28.681 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:17:28.681 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:28.681 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:28.681 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:28.681 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:28.681 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:28.681 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:28.681 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:17:28.681 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:28.681 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:17:28.681 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:28.681 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:28.681 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:28.681 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:28.681 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:28.681 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:28.681 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:28.681 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:28.681 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:28.681 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:28.681 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:17:28.681 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:17:28.681 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@13 -- # 
subnqn3=nqn.2024-10.io.spdk:cnode2 00:17:28.681 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:17:28.681 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:17:28.681 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:17:28.681 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:28.681 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:28.681 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:28.681 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:28.681 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:28.681 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:28.681 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:28.681 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:28.681 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:17:28.681 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:17:28.681 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:17:28.681 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:17:28.681 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:17:28.681 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@460 -- # nvmf_veth_init 00:17:28.681 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:28.681 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:28.681 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:28.681 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:28.681 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:28.681 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:28.681 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:28.682 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:28.682 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:28.682 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:28.682 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:28.682 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:28.682 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:28.682 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@158 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:28.682 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:28.682 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:28.682 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:28.682 Cannot find device "nvmf_init_br" 00:17:28.682 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@162 -- # true 00:17:28.682 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:28.682 Cannot find device "nvmf_init_br2" 00:17:28.682 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@163 -- # true 00:17:28.682 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:28.682 Cannot find device "nvmf_tgt_br" 00:17:28.682 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@164 -- # true 00:17:28.682 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:17:28.682 Cannot find device "nvmf_tgt_br2" 00:17:28.682 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@165 -- # true 00:17:28.682 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:28.682 Cannot find device "nvmf_init_br" 00:17:28.682 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@166 -- # true 00:17:28.682 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:28.682 Cannot find device "nvmf_init_br2" 00:17:28.682 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@167 -- # true 00:17:28.682 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:28.941 Cannot find device "nvmf_tgt_br" 00:17:28.941 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@168 -- # true 00:17:28.941 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:28.941 Cannot find device "nvmf_tgt_br2" 00:17:28.941 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@169 -- # true 00:17:28.941 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:28.941 Cannot find device "nvmf_br" 00:17:28.941 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@170 -- # true 00:17:28.941 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:28.941 Cannot find device "nvmf_init_if" 00:17:28.941 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@171 -- # true 00:17:28.941 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:28.941 Cannot find device "nvmf_init_if2" 00:17:28.941 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@172 -- # true 00:17:28.941 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:28.941 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:28.941 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@173 -- # true 00:17:28.941 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 
00:17:28.941 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:28.941 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@174 -- # true 00:17:28.941 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:28.941 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:28.941 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:28.941 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:28.941 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:28.941 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:28.941 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:28.941 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:28.941 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:28.941 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:28.941 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:28.941 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:28.941 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:28.941 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:28.941 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:28.941 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:28.941 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:28.941 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:28.941 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:28.941 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:29.199 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:29.199 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:29.199 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:29.199 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:29.199 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:29.199 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 
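The "Cannot find device" and "Cannot open network namespace" messages above are expected: before rebuilding the topology for this test, the previous interfaces are torn down unconditionally, with each command guarded so that a missing device is not treated as a failure (the trailing "true" entries in the trace are those guards firing). The pattern, roughly (not the full helper):

ip link set nvmf_init_br nomaster || true
ip link set nvmf_init_br2 nomaster || true
ip link set nvmf_tgt_br nomaster || true
ip link set nvmf_tgt_br2 nomaster || true
ip link delete nvmf_br type bridge || true
ip link delete nvmf_init_if || true
ip link delete nvmf_init_if2 || true
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if || true
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 || true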
00:17:29.199 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:29.199 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:29.199 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:29.199 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:29.199 20:45:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:29.199 20:45:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:29.199 20:45:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:29.199 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:29.199 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.079 ms 00:17:29.199 00:17:29.199 --- 10.0.0.3 ping statistics --- 00:17:29.199 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:29.199 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:17:29.199 20:45:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:29.199 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:17:29.199 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.058 ms 00:17:29.199 00:17:29.199 --- 10.0.0.4 ping statistics --- 00:17:29.199 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:29.199 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:17:29.199 20:45:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:29.199 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:29.199 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.060 ms 00:17:29.199 00:17:29.199 --- 10.0.0.1 ping statistics --- 00:17:29.199 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:29.199 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:17:29.199 20:45:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:29.199 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:29.199 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.069 ms 00:17:29.199 00:17:29.199 --- 10.0.0.2 ping statistics --- 00:17:29.199 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:29.199 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:17:29.199 20:45:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:29.199 20:45:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@461 -- # return 0 00:17:29.199 20:45:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:29.199 20:45:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:29.199 20:45:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:29.199 20:45:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:29.199 20:45:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:29.199 20:45:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:29.199 20:45:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:29.199 20:45:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:17:29.199 20:45:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:29.199 20:45:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:29.199 20:45:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:17:29.199 20:45:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=74259 00:17:29.199 20:45:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 74259 00:17:29.199 20:45:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 74259 ']' 00:17:29.199 20:45:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:29.199 20:45:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:17:29.199 20:45:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:29.199 20:45:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:29.199 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:29.199 20:45:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:29.199 20:45:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:17:29.199 [2024-11-26 20:45:24.126856] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
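Every firewall rule in the trace goes through the same two helpers: ipts inserts a rule with an SPDK_NVMF:<rule> comment attached (that is the expansion shown at nvmf/common.sh@790), and iptr later removes exactly the tagged rules by filtering the saved ruleset (the iptables-save | grep -v SPDK_NVMF | iptables-restore pipeline seen during the previous cleanup). A minimal sketch of the pair:

ipts() {
    # run the given iptables command, tagging the rule with its own text
    iptables "$@" -m comment --comment "SPDK_NVMF:$*"
}

iptr() {
    # restore the ruleset minus every rule tagged by ipts
    iptables-save | grep -v SPDK_NVMF | iptables-restore
}

ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
# ... test runs ...
iptr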
00:17:29.199 [2024-11-26 20:45:24.126958] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:29.458 [2024-11-26 20:45:24.288386] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:29.458 [2024-11-26 20:45:24.361072] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:29.458 [2024-11-26 20:45:24.361143] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:29.458 [2024-11-26 20:45:24.361170] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:29.458 [2024-11-26 20:45:24.361184] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:29.458 [2024-11-26 20:45:24.361195] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:29.458 [2024-11-26 20:45:24.361651] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:29.458 [2024-11-26 20:45:24.446701] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:30.392 20:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:30.392 20:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:17:30.392 20:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:30.392 20:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:30.392 20:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:17:30.392 20:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:30.392 20:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:17:30.392 20:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=74291 00:17:30.392 20:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.3 00:17:30.392 20:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:17:30.392 20:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:17:30.392 20:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:17:30.392 20:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:30.392 20:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:30.392 20:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:30.392 20:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:30.392 20:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:30.392 20:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:30.392 20:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:30.392 20:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 
-- # [[ -z 10.0.0.1 ]] 00:17:30.392 20:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:30.392 20:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:17:30.392 20:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:17:30.392 20:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=71d5b84d-60d2-4336-b806-42f5c9a0144b 00:17:30.392 20:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:17:30.392 20:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=00e97d1e-8d7e-4653-9721-391392b427e6 00:17:30.392 20:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:17:30.392 20:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=62a4598a-ae0c-48a1-8692-0d23f7debaea 00:17:30.392 20:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:17:30.392 20:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.392 20:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:17:30.392 null0 00:17:30.393 null1 00:17:30.393 null2 00:17:30.393 [2024-11-26 20:45:25.284862] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:30.393 [2024-11-26 20:45:25.301134] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:17:30.393 [2024-11-26 20:45:25.301240] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74291 ] 00:17:30.393 [2024-11-26 20:45:25.309025] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:30.393 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 00:17:30.393 20:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.393 20:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 74291 /var/tmp/tgt2.sock 00:17:30.393 20:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 74291 ']' 00:17:30.393 20:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:17:30.393 20:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:30.393 20:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 
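The three uuidgen values above (ns1uuid, ns2uuid, ns3uuid) become the namespace identities that the rest of the test validates from the initiator side: after nvme connect to nqn.2024-10.io.spdk:cnode2 at 10.0.0.1:4421, each block device must report an NGUID equal to its UUID with the dashes stripped (uuid2nguid in the trace is just tr -d -). The checks that follow below amount to this sketch; check_nguid is an illustrative wrapper, not a helper from the suite:

check_nguid() {
    local dev=$1 uuid=$2 nguid expected
    nguid=$(nvme id-ns "/dev/$dev" -o json | jq -r .nguid)
    expected=$(tr -d - <<< "$uuid")
    [[ ${nguid^^} == "${expected^^}" ]]
}

check_nguid nvme0n1 71d5b84d-60d2-4336-b806-42f5c9a0144b
check_nguid nvme0n2 00e97d1e-8d7e-4653-9721-391392b427e6
check_nguid nvme0n3 62a4598a-ae0c-48a1-8692-0d23f7debaea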
00:17:30.393 20:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:30.393 20:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:17:30.651 [2024-11-26 20:45:25.460765] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:30.651 [2024-11-26 20:45:25.527915] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:30.651 [2024-11-26 20:45:25.595762] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:30.910 20:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:30.910 20:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:17:30.910 20:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:17:31.477 [2024-11-26 20:45:26.231211] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:31.477 [2024-11-26 20:45:26.247337] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:17:31.477 nvme0n1 nvme0n2 00:17:31.477 nvme1n1 00:17:31.477 20:45:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:17:31.477 20:45:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:17:31.477 20:45:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b --hostid=5b7a0101-ee75-44bd-b64f-b6a56d193f2b 00:17:31.477 20:45:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:17:31.477 20:45:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:17:31.477 20:45:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:17:31.477 20:45:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:17:31.477 20:45:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:17:31.477 20:45:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:17:31.477 20:45:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:17:31.477 20:45:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:17:31.477 20:45:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:17:31.477 20:45:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:17:31.477 20:45:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:17:31.477 20:45:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:17:31.477 20:45:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:17:32.855 20:45:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:17:32.855 20:45:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:17:32.855 20:45:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:17:32.855 20:45:27 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:17:32.855 20:45:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:17:32.855 20:45:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid 71d5b84d-60d2-4336-b806-42f5c9a0144b 00:17:32.855 20:45:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:17:32.855 20:45:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:17:32.855 20:45:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:17:32.855 20:45:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:17:32.855 20:45:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:17:32.855 20:45:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=71d5b84d60d24336b80642f5c9a0144b 00:17:32.855 20:45:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 71D5B84D60D24336B80642F5C9A0144B 00:17:32.855 20:45:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ 71D5B84D60D24336B80642F5C9A0144B == \7\1\D\5\B\8\4\D\6\0\D\2\4\3\3\6\B\8\0\6\4\2\F\5\C\9\A\0\1\4\4\B ]] 00:17:32.855 20:45:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:17:32.855 20:45:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:17:32.855 20:45:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:17:32.855 20:45:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:17:32.855 20:45:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:17:32.855 20:45:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:17:32.855 20:45:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:17:32.855 20:45:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid 00e97d1e-8d7e-4653-9721-391392b427e6 00:17:32.855 20:45:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:17:32.855 20:45:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:17:32.855 20:45:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:17:32.855 20:45:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:17:32.855 20:45:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:17:32.855 20:45:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=00e97d1e8d7e46539721391392b427e6 00:17:32.855 20:45:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 00E97D1E8D7E46539721391392B427E6 00:17:32.855 20:45:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ 00E97D1E8D7E46539721391392B427E6 == \0\0\E\9\7\D\1\E\8\D\7\E\4\6\5\3\9\7\2\1\3\9\1\3\9\2\B\4\2\7\E\6 ]] 00:17:32.855 20:45:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:17:32.855 20:45:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:17:32.855 20:45:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:17:32.855 20:45:27 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:17:32.855 20:45:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:17:32.855 20:45:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:17:32.855 20:45:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:17:32.855 20:45:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid 62a4598a-ae0c-48a1-8692-0d23f7debaea 00:17:32.855 20:45:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:17:32.855 20:45:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:17:32.855 20:45:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:17:32.856 20:45:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:17:32.856 20:45:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:17:32.856 20:45:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=62a4598aae0c48a186920d23f7debaea 00:17:32.856 20:45:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 62A4598AAE0C48A186920D23F7DEBAEA 00:17:32.856 20:45:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ 62A4598AAE0C48A186920D23F7DEBAEA == \6\2\A\4\5\9\8\A\A\E\0\C\4\8\A\1\8\6\9\2\0\D\2\3\F\7\D\E\B\A\E\A ]] 00:17:32.856 20:45:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:17:33.114 20:45:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:17:33.114 20:45:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:17:33.114 20:45:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 74291 00:17:33.114 20:45:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 74291 ']' 00:17:33.114 20:45:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 74291 00:17:33.114 20:45:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:17:33.114 20:45:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:33.114 20:45:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74291 00:17:33.114 killing process with pid 74291 00:17:33.114 20:45:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:33.114 20:45:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:33.114 20:45:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74291' 00:17:33.114 20:45:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 74291 00:17:33.114 20:45:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 74291 00:17:33.373 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:17:33.373 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:33.373 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:17:33.373 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' 
tcp == tcp ']' 00:17:33.373 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set +e 00:17:33.373 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:33.373 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:33.373 rmmod nvme_tcp 00:17:33.373 rmmod nvme_fabrics 00:17:33.631 rmmod nvme_keyring 00:17:33.631 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:33.631 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:17:33.631 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:17:33.631 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 74259 ']' 00:17:33.631 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 74259 00:17:33.631 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 74259 ']' 00:17:33.631 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 74259 00:17:33.631 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:17:33.631 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:33.632 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74259 00:17:33.632 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:33.632 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:33.632 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74259' 00:17:33.632 killing process with pid 74259 00:17:33.632 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 74259 00:17:33.632 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 74259 00:17:33.890 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:33.890 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:33.890 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:33.890 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:17:33.890 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:17:33.890 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:33.890 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:17:33.890 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:33.890 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:33.890 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:33.890 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:33.890 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:33.890 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@236 -- # ip link set 
nvmf_tgt_br2 nomaster 00:17:33.890 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:33.890 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:33.890 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:33.890 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:33.890 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:33.890 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:33.890 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:34.149 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:34.149 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:34.149 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:34.149 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:34.149 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:34.149 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:34.149 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@300 -- # return 0 00:17:34.149 00:17:34.149 real 0m5.671s 00:17:34.149 user 0m7.746s 00:17:34.149 sys 0m2.202s 00:17:34.149 ************************************ 00:17:34.149 END TEST nvmf_nsid 00:17:34.149 ************************************ 00:17:34.149 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:34.149 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:17:34.149 20:45:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:17:34.149 00:17:34.149 real 5m16.454s 00:17:34.149 user 10m42.640s 00:17:34.149 sys 1m26.459s 00:17:34.149 ************************************ 00:17:34.149 END TEST nvmf_target_extra 00:17:34.149 ************************************ 00:17:34.149 20:45:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:34.149 20:45:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:34.149 20:45:29 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:17:34.149 20:45:29 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:34.149 20:45:29 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:34.149 20:45:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:34.149 ************************************ 00:17:34.149 START TEST nvmf_host 00:17:34.149 ************************************ 00:17:34.149 20:45:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:17:34.411 * Looking for test storage... 
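The cleanup traced above restores iptables to its pre-test state (keeping everything except the SPDK_NVMF-tagged rules) and tears down the veth/bridge topology before the target namespace goes away. Roughly equivalent commands, assuming the same interface names and assuming that remove_spdk_ns deletes the nvmf_tgt_ns_spdk namespace:

  iptables-save | grep -v SPDK_NVMF | iptables-restore
  for l in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$l" nomaster
      ip link set "$l" down
  done
  ip link delete nvmf_br type bridge
  ip link delete nvmf_init_if
  ip link delete nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
  ip netns delete nvmf_tgt_ns_spdk    # assumed effect of remove_spdk_ns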
00:17:34.411 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:17:34.411 20:45:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:34.411 20:45:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lcov --version 00:17:34.411 20:45:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:34.411 20:45:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:34.411 20:45:29 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:34.411 20:45:29 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:34.411 20:45:29 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:34.411 20:45:29 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:17:34.411 20:45:29 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:17:34.411 20:45:29 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:17:34.411 20:45:29 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:17:34.411 20:45:29 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:17:34.411 20:45:29 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:17:34.411 20:45:29 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:17:34.411 20:45:29 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:34.411 20:45:29 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:17:34.411 20:45:29 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:17:34.411 20:45:29 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:34.411 20:45:29 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:34.411 20:45:29 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:17:34.411 20:45:29 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:17:34.411 20:45:29 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:34.411 20:45:29 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:17:34.411 20:45:29 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:17:34.411 20:45:29 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:17:34.411 20:45:29 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:17:34.411 20:45:29 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:34.411 20:45:29 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:17:34.411 20:45:29 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:17:34.411 20:45:29 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:34.411 20:45:29 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:34.411 20:45:29 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:17:34.411 20:45:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:34.411 20:45:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:34.411 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:34.411 --rc genhtml_branch_coverage=1 00:17:34.411 --rc genhtml_function_coverage=1 00:17:34.411 --rc genhtml_legend=1 00:17:34.411 --rc geninfo_all_blocks=1 00:17:34.411 --rc geninfo_unexecuted_blocks=1 00:17:34.411 00:17:34.411 ' 00:17:34.411 20:45:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:34.411 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:17:34.411 --rc genhtml_branch_coverage=1 00:17:34.411 --rc genhtml_function_coverage=1 00:17:34.411 --rc genhtml_legend=1 00:17:34.411 --rc geninfo_all_blocks=1 00:17:34.411 --rc geninfo_unexecuted_blocks=1 00:17:34.411 00:17:34.411 ' 00:17:34.411 20:45:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:34.411 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:34.411 --rc genhtml_branch_coverage=1 00:17:34.411 --rc genhtml_function_coverage=1 00:17:34.411 --rc genhtml_legend=1 00:17:34.411 --rc geninfo_all_blocks=1 00:17:34.411 --rc geninfo_unexecuted_blocks=1 00:17:34.411 00:17:34.411 ' 00:17:34.411 20:45:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:34.411 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:34.411 --rc genhtml_branch_coverage=1 00:17:34.411 --rc genhtml_function_coverage=1 00:17:34.411 --rc genhtml_legend=1 00:17:34.411 --rc geninfo_all_blocks=1 00:17:34.411 --rc geninfo_unexecuted_blocks=1 00:17:34.411 00:17:34.411 ' 00:17:34.411 20:45:29 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:34.411 20:45:29 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:17:34.411 20:45:29 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:34.411 20:45:29 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:34.411 20:45:29 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:34.411 20:45:29 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:34.411 20:45:29 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:34.411 20:45:29 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:34.411 20:45:29 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:34.411 20:45:29 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:34.411 20:45:29 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:34.411 20:45:29 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:34.411 20:45:29 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b 00:17:34.411 20:45:29 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b7a0101-ee75-44bd-b64f-b6a56d193f2b 00:17:34.411 20:45:29 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:34.411 20:45:29 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:34.411 20:45:29 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:34.411 20:45:29 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:34.411 20:45:29 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:34.411 20:45:29 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:17:34.411 20:45:29 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:34.411 20:45:29 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:34.411 20:45:29 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:34.411 20:45:29 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:34.411 20:45:29 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:34.411 20:45:29 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:34.411 20:45:29 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:17:34.411 20:45:29 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:34.411 20:45:29 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:17:34.411 20:45:29 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:34.411 20:45:29 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:34.411 20:45:29 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:34.412 20:45:29 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:34.412 20:45:29 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:34.412 20:45:29 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:34.412 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:34.412 20:45:29 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:34.412 20:45:29 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:34.412 20:45:29 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:34.412 20:45:29 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:17:34.412 20:45:29 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:17:34.412 20:45:29 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 1 -eq 0 ]] 00:17:34.412 20:45:29 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:17:34.412 
20:45:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:34.412 20:45:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:34.412 20:45:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:17:34.412 ************************************ 00:17:34.412 START TEST nvmf_identify 00:17:34.412 ************************************ 00:17:34.412 20:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:17:34.672 * Looking for test storage... 00:17:34.672 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:34.672 20:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:34.672 20:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lcov --version 00:17:34.672 20:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:34.672 20:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:34.672 20:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:34.672 20:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:34.672 20:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:34.672 20:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:17:34.672 20:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:17:34.672 20:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:17:34.672 20:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:17:34.672 20:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:17:34.672 20:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:17:34.672 20:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:17:34.672 20:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:34.672 20:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:17:34.672 20:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:17:34.672 20:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:34.672 20:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:34.672 20:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:17:34.672 20:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:17:34.672 20:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:34.672 20:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:17:34.672 20:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:17:34.672 20:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:17:34.672 20:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:17:34.672 20:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:34.672 20:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:17:34.672 20:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:17:34.672 20:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:34.672 20:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:34.672 20:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:17:34.672 20:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:34.672 20:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:34.672 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:34.672 --rc genhtml_branch_coverage=1 00:17:34.672 --rc genhtml_function_coverage=1 00:17:34.672 --rc genhtml_legend=1 00:17:34.672 --rc geninfo_all_blocks=1 00:17:34.672 --rc geninfo_unexecuted_blocks=1 00:17:34.672 00:17:34.672 ' 00:17:34.672 20:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:34.672 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:34.672 --rc genhtml_branch_coverage=1 00:17:34.672 --rc genhtml_function_coverage=1 00:17:34.672 --rc genhtml_legend=1 00:17:34.672 --rc geninfo_all_blocks=1 00:17:34.672 --rc geninfo_unexecuted_blocks=1 00:17:34.672 00:17:34.672 ' 00:17:34.672 20:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:34.672 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:34.672 --rc genhtml_branch_coverage=1 00:17:34.672 --rc genhtml_function_coverage=1 00:17:34.672 --rc genhtml_legend=1 00:17:34.673 --rc geninfo_all_blocks=1 00:17:34.673 --rc geninfo_unexecuted_blocks=1 00:17:34.673 00:17:34.673 ' 00:17:34.673 20:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:34.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:34.673 --rc genhtml_branch_coverage=1 00:17:34.673 --rc genhtml_function_coverage=1 00:17:34.673 --rc genhtml_legend=1 00:17:34.673 --rc geninfo_all_blocks=1 00:17:34.673 --rc geninfo_unexecuted_blocks=1 00:17:34.673 00:17:34.673 ' 00:17:34.673 20:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:34.673 20:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:17:34.673 20:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:34.673 20:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:17:34.673 20:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:34.673 20:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:34.673 20:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:34.673 20:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:34.673 20:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:34.673 20:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:34.673 20:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:34.673 20:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:34.673 20:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b 00:17:34.673 20:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=5b7a0101-ee75-44bd-b64f-b6a56d193f2b 00:17:34.673 20:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:34.673 20:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:34.673 20:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:34.673 20:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:34.673 20:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:34.673 20:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:17:34.673 20:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:34.673 20:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:34.673 20:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:34.673 20:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:34.673 20:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:34.673 
20:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:34.673 20:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:17:34.673 20:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:34.673 20:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:17:34.673 20:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:34.673 20:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:34.673 20:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:34.673 20:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:34.673 20:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:34.673 20:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:34.673 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:34.673 20:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:34.673 20:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:34.673 20:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:34.673 20:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:34.673 20:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:34.673 20:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:17:34.673 20:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:34.673 20:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:34.673 20:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:34.673 20:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:34.673 20:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:34.673 20:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:34.673 20:45:29 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:34.673 20:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:34.673 20:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:17:34.673 20:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:17:34.673 20:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:17:34.673 20:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:17:34.673 20:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:17:34.673 20:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@460 -- # nvmf_veth_init 00:17:34.673 20:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:34.673 20:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:34.673 20:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:34.673 20:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:34.673 20:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:34.674 20:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:34.674 20:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:34.674 20:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:34.674 20:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:34.674 20:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:34.674 20:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:34.674 20:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:34.674 20:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:34.674 20:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:34.674 20:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:34.674 20:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:34.674 20:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:34.674 Cannot find device "nvmf_init_br" 00:17:34.674 20:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # true 00:17:34.674 20:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:34.674 Cannot find device "nvmf_init_br2" 00:17:34.674 20:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # true 00:17:34.674 20:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:34.674 Cannot find device "nvmf_tgt_br" 00:17:34.674 20:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@164 -- # true 00:17:34.674 20:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 
00:17:34.933 Cannot find device "nvmf_tgt_br2" 00:17:34.933 20:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@165 -- # true 00:17:34.933 20:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:34.933 Cannot find device "nvmf_init_br" 00:17:34.933 20:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # true 00:17:34.933 20:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:34.933 Cannot find device "nvmf_init_br2" 00:17:34.934 20:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@167 -- # true 00:17:34.934 20:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:34.934 Cannot find device "nvmf_tgt_br" 00:17:34.934 20:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@168 -- # true 00:17:34.934 20:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:34.934 Cannot find device "nvmf_tgt_br2" 00:17:34.934 20:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # true 00:17:34.934 20:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:34.934 Cannot find device "nvmf_br" 00:17:34.934 20:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # true 00:17:34.934 20:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:34.934 Cannot find device "nvmf_init_if" 00:17:34.934 20:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # true 00:17:34.934 20:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:34.934 Cannot find device "nvmf_init_if2" 00:17:34.934 20:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # true 00:17:34.934 20:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:34.934 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:34.934 20:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@173 -- # true 00:17:34.934 20:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:34.934 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:34.934 20:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # true 00:17:34.934 20:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:34.934 20:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:34.934 20:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:34.934 20:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:34.934 20:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:34.934 20:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:34.934 20:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:34.934 20:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:34.934 
20:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:34.934 20:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:34.934 20:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:35.193 20:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:35.193 20:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:35.193 20:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:35.193 20:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:35.193 20:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:35.193 20:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:35.193 20:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:35.193 20:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:35.193 20:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:35.193 20:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:35.193 20:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:35.193 20:45:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:35.193 20:45:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:35.193 20:45:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:35.193 20:45:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:35.193 20:45:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:35.193 20:45:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:35.193 20:45:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:35.193 20:45:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:35.193 20:45:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:35.193 20:45:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:35.193 20:45:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:35.193 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:17:35.193 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.083 ms 00:17:35.193 00:17:35.193 --- 10.0.0.3 ping statistics --- 00:17:35.193 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:35.193 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:17:35.193 20:45:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:35.193 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:17:35.193 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.063 ms 00:17:35.193 00:17:35.193 --- 10.0.0.4 ping statistics --- 00:17:35.193 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:35.193 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:17:35.193 20:45:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:35.193 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:35.193 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:17:35.193 00:17:35.193 --- 10.0.0.1 ping statistics --- 00:17:35.193 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:35.193 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:17:35.193 20:45:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:35.193 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:35.193 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.046 ms 00:17:35.193 00:17:35.193 --- 10.0.0.2 ping statistics --- 00:17:35.193 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:35.193 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:17:35.193 20:45:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:35.193 20:45:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@461 -- # return 0 00:17:35.193 20:45:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:35.193 20:45:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:35.193 20:45:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:35.193 20:45:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:35.193 20:45:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:35.193 20:45:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:35.193 20:45:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:35.193 20:45:30 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:17:35.193 20:45:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:35.193 20:45:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:35.193 20:45:30 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=74647 00:17:35.193 20:45:30 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:35.193 20:45:30 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:35.193 20:45:30 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 74647 00:17:35.193 20:45:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 74647 ']' 00:17:35.193 
20:45:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:35.193 20:45:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:35.193 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:35.193 20:45:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:35.193 20:45:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:35.193 20:45:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:35.452 [2024-11-26 20:45:30.184245] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:17:35.452 [2024-11-26 20:45:30.184347] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:35.452 [2024-11-26 20:45:30.343235] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:35.452 [2024-11-26 20:45:30.411020] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:35.452 [2024-11-26 20:45:30.411091] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:35.452 [2024-11-26 20:45:30.411107] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:35.452 [2024-11-26 20:45:30.411121] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:35.452 [2024-11-26 20:45:30.411132] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
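With the virtual network up (the pings above show all four addresses answering), the identify test starts nvmf_tgt inside the nvmf_tgt_ns_spdk namespace and then configures it over its RPC socket. The rpc_cmd traces that follow correspond to this sequence, shown here with the repository's rpc.py and its default /var/tmp/spdk.sock socket assumed:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
      --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420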
00:17:35.452 [2024-11-26 20:45:30.412625] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:35.452 [2024-11-26 20:45:30.412818] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:35.452 [2024-11-26 20:45:30.412912] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:35.452 [2024-11-26 20:45:30.413040] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:35.710 [2024-11-26 20:45:30.498138] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:36.277 20:45:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:36.277 20:45:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:17:36.277 20:45:31 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:36.277 20:45:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.277 20:45:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:36.277 [2024-11-26 20:45:31.100629] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:36.277 20:45:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.277 20:45:31 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:17:36.277 20:45:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:36.277 20:45:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:36.277 20:45:31 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:36.277 20:45:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.277 20:45:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:36.277 Malloc0 00:17:36.277 20:45:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.277 20:45:31 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:36.277 20:45:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.277 20:45:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:36.277 20:45:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.277 20:45:31 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:17:36.277 20:45:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.277 20:45:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:36.277 20:45:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.277 20:45:31 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:17:36.277 20:45:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.277 20:45:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:36.277 [2024-11-26 20:45:31.222207] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:36.277 20:45:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.277 20:45:31 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:17:36.277 20:45:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.277 20:45:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:36.277 20:45:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.277 20:45:31 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:17:36.277 20:45:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.277 20:45:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:36.277 [ 00:17:36.277 { 00:17:36.277 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:36.277 "subtype": "Discovery", 00:17:36.277 "listen_addresses": [ 00:17:36.277 { 00:17:36.277 "trtype": "TCP", 00:17:36.277 "adrfam": "IPv4", 00:17:36.277 "traddr": "10.0.0.3", 00:17:36.277 "trsvcid": "4420" 00:17:36.277 } 00:17:36.277 ], 00:17:36.277 "allow_any_host": true, 00:17:36.277 "hosts": [] 00:17:36.277 }, 00:17:36.277 { 00:17:36.277 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:36.277 "subtype": "NVMe", 00:17:36.277 "listen_addresses": [ 00:17:36.277 { 00:17:36.277 "trtype": "TCP", 00:17:36.277 "adrfam": "IPv4", 00:17:36.277 "traddr": "10.0.0.3", 00:17:36.277 "trsvcid": "4420" 00:17:36.277 } 00:17:36.277 ], 00:17:36.277 "allow_any_host": true, 00:17:36.277 "hosts": [], 00:17:36.277 "serial_number": "SPDK00000000000001", 00:17:36.277 "model_number": "SPDK bdev Controller", 00:17:36.277 "max_namespaces": 32, 00:17:36.277 "min_cntlid": 1, 00:17:36.277 "max_cntlid": 65519, 00:17:36.277 "namespaces": [ 00:17:36.277 { 00:17:36.277 "nsid": 1, 00:17:36.277 "bdev_name": "Malloc0", 00:17:36.277 "name": "Malloc0", 00:17:36.277 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:17:36.277 "eui64": "ABCDEF0123456789", 00:17:36.277 "uuid": "7b9168d4-357b-48fe-8173-db87e3420a38" 00:17:36.277 } 00:17:36.277 ] 00:17:36.277 } 00:17:36.277 ] 00:17:36.277 20:45:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.277 20:45:31 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:17:36.541 [2024-11-26 20:45:31.276612] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:17:36.541 [2024-11-26 20:45:31.276671] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74682 ] 00:17:36.541 [2024-11-26 20:45:31.432284] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:17:36.541 [2024-11-26 20:45:31.432364] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:17:36.541 [2024-11-26 20:45:31.432370] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:17:36.541 [2024-11-26 20:45:31.432391] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:17:36.541 [2024-11-26 20:45:31.432404] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:17:36.541 [2024-11-26 20:45:31.432790] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:17:36.541 [2024-11-26 20:45:31.432840] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1e9f750 0 00:17:36.541 [2024-11-26 20:45:31.439179] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:17:36.541 [2024-11-26 20:45:31.439199] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:17:36.541 [2024-11-26 20:45:31.439204] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:17:36.541 [2024-11-26 20:45:31.439208] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:17:36.541 [2024-11-26 20:45:31.439245] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:36.541 [2024-11-26 20:45:31.439251] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:36.541 [2024-11-26 20:45:31.439256] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e9f750) 00:17:36.541 [2024-11-26 20:45:31.439270] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:17:36.541 [2024-11-26 20:45:31.439305] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f03740, cid 0, qid 0 00:17:36.541 [2024-11-26 20:45:31.447170] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:36.541 [2024-11-26 20:45:31.447186] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:36.541 [2024-11-26 20:45:31.447190] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:36.541 [2024-11-26 20:45:31.447195] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f03740) on tqpair=0x1e9f750 00:17:36.541 [2024-11-26 20:45:31.447206] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:17:36.541 [2024-11-26 20:45:31.447214] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:17:36.541 [2024-11-26 20:45:31.447220] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:17:36.541 [2024-11-26 20:45:31.447237] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:36.541 [2024-11-26 20:45:31.447242] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:17:36.541 [2024-11-26 20:45:31.447246] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e9f750) 00:17:36.541 [2024-11-26 20:45:31.447254] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.541 [2024-11-26 20:45:31.447276] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f03740, cid 0, qid 0 00:17:36.541 [2024-11-26 20:45:31.447329] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:36.541 [2024-11-26 20:45:31.447335] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:36.541 [2024-11-26 20:45:31.447339] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:36.541 [2024-11-26 20:45:31.447343] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f03740) on tqpair=0x1e9f750 00:17:36.541 [2024-11-26 20:45:31.447349] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:17:36.541 [2024-11-26 20:45:31.447357] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:17:36.541 [2024-11-26 20:45:31.447364] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:36.541 [2024-11-26 20:45:31.447368] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:36.541 [2024-11-26 20:45:31.447371] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e9f750) 00:17:36.541 [2024-11-26 20:45:31.447378] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.541 [2024-11-26 20:45:31.447393] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f03740, cid 0, qid 0 00:17:36.541 [2024-11-26 20:45:31.447435] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:36.541 [2024-11-26 20:45:31.447441] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:36.541 [2024-11-26 20:45:31.447445] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:36.541 [2024-11-26 20:45:31.447449] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f03740) on tqpair=0x1e9f750 00:17:36.541 [2024-11-26 20:45:31.447454] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:17:36.541 [2024-11-26 20:45:31.447462] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:17:36.541 [2024-11-26 20:45:31.447469] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:36.541 [2024-11-26 20:45:31.447473] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:36.541 [2024-11-26 20:45:31.447477] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e9f750) 00:17:36.541 [2024-11-26 20:45:31.447483] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.541 [2024-11-26 20:45:31.447496] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f03740, cid 0, qid 0 00:17:36.541 [2024-11-26 20:45:31.447536] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:36.541 [2024-11-26 20:45:31.447542] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:36.541 [2024-11-26 20:45:31.447545] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:36.541 [2024-11-26 20:45:31.447549] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f03740) on tqpair=0x1e9f750 00:17:36.541 [2024-11-26 20:45:31.447555] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:17:36.541 [2024-11-26 20:45:31.447563] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:36.541 [2024-11-26 20:45:31.447567] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:36.541 [2024-11-26 20:45:31.447571] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e9f750) 00:17:36.541 [2024-11-26 20:45:31.447578] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.541 [2024-11-26 20:45:31.447591] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f03740, cid 0, qid 0 00:17:36.541 [2024-11-26 20:45:31.447629] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:36.541 [2024-11-26 20:45:31.447635] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:36.542 [2024-11-26 20:45:31.447638] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:36.542 [2024-11-26 20:45:31.447642] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f03740) on tqpair=0x1e9f750 00:17:36.542 [2024-11-26 20:45:31.447647] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:17:36.542 [2024-11-26 20:45:31.447652] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:17:36.542 [2024-11-26 20:45:31.447660] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:17:36.542 [2024-11-26 20:45:31.447770] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:17:36.542 [2024-11-26 20:45:31.447776] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:17:36.542 [2024-11-26 20:45:31.447784] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:36.542 [2024-11-26 20:45:31.447788] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:36.542 [2024-11-26 20:45:31.447792] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e9f750) 00:17:36.542 [2024-11-26 20:45:31.447798] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.542 [2024-11-26 20:45:31.447812] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f03740, cid 0, qid 0 00:17:36.542 [2024-11-26 20:45:31.447847] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:36.542 [2024-11-26 20:45:31.447853] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:36.542 [2024-11-26 20:45:31.447858] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: 
enter 00:17:36.542 [2024-11-26 20:45:31.447862] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f03740) on tqpair=0x1e9f750 00:17:36.542 [2024-11-26 20:45:31.447866] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:17:36.542 [2024-11-26 20:45:31.447875] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:36.542 [2024-11-26 20:45:31.447879] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:36.542 [2024-11-26 20:45:31.447883] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e9f750) 00:17:36.542 [2024-11-26 20:45:31.447889] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.542 [2024-11-26 20:45:31.447903] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f03740, cid 0, qid 0 00:17:36.542 [2024-11-26 20:45:31.447940] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:36.542 [2024-11-26 20:45:31.447946] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:36.542 [2024-11-26 20:45:31.447950] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:36.542 [2024-11-26 20:45:31.447954] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f03740) on tqpair=0x1e9f750 00:17:36.542 [2024-11-26 20:45:31.447958] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:17:36.542 [2024-11-26 20:45:31.447964] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:17:36.542 [2024-11-26 20:45:31.447971] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:17:36.542 [2024-11-26 20:45:31.447980] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:17:36.542 [2024-11-26 20:45:31.447989] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:36.542 [2024-11-26 20:45:31.447993] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e9f750) 00:17:36.542 [2024-11-26 20:45:31.448000] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.542 [2024-11-26 20:45:31.448013] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f03740, cid 0, qid 0 00:17:36.542 [2024-11-26 20:45:31.448089] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:36.542 [2024-11-26 20:45:31.448095] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:36.542 [2024-11-26 20:45:31.448099] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:36.542 [2024-11-26 20:45:31.448103] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e9f750): datao=0, datal=4096, cccid=0 00:17:36.542 [2024-11-26 20:45:31.448108] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1f03740) on tqpair(0x1e9f750): expected_datao=0, payload_size=4096 00:17:36.542 [2024-11-26 20:45:31.448113] nvme_tcp.c: 732:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:17:36.542 [2024-11-26 20:45:31.448121] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:36.542 [2024-11-26 20:45:31.448125] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:36.542 [2024-11-26 20:45:31.448133] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:36.542 [2024-11-26 20:45:31.448139] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:36.542 [2024-11-26 20:45:31.448142] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:36.542 [2024-11-26 20:45:31.448146] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f03740) on tqpair=0x1e9f750 00:17:36.542 [2024-11-26 20:45:31.448166] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:17:36.542 [2024-11-26 20:45:31.448173] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:17:36.542 [2024-11-26 20:45:31.448178] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:17:36.542 [2024-11-26 20:45:31.448188] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:17:36.542 [2024-11-26 20:45:31.448193] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:17:36.542 [2024-11-26 20:45:31.448199] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:17:36.542 [2024-11-26 20:45:31.448207] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:17:36.542 [2024-11-26 20:45:31.448214] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:36.542 [2024-11-26 20:45:31.448218] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:36.542 [2024-11-26 20:45:31.448222] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e9f750) 00:17:36.542 [2024-11-26 20:45:31.448228] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:36.542 [2024-11-26 20:45:31.448243] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f03740, cid 0, qid 0 00:17:36.542 [2024-11-26 20:45:31.448282] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:36.542 [2024-11-26 20:45:31.448288] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:36.542 [2024-11-26 20:45:31.448292] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:36.542 [2024-11-26 20:45:31.448296] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f03740) on tqpair=0x1e9f750 00:17:36.542 [2024-11-26 20:45:31.448303] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:36.542 [2024-11-26 20:45:31.448307] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:36.542 [2024-11-26 20:45:31.448311] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e9f750) 00:17:36.542 [2024-11-26 20:45:31.448317] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:36.542 
[2024-11-26 20:45:31.448323] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:36.543 [2024-11-26 20:45:31.448327] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:36.543 [2024-11-26 20:45:31.448331] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1e9f750) 00:17:36.543 [2024-11-26 20:45:31.448336] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:36.543 [2024-11-26 20:45:31.448342] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:36.543 [2024-11-26 20:45:31.448346] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:36.543 [2024-11-26 20:45:31.448350] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1e9f750) 00:17:36.543 [2024-11-26 20:45:31.448355] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:36.543 [2024-11-26 20:45:31.448361] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:36.543 [2024-11-26 20:45:31.448365] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:36.543 [2024-11-26 20:45:31.448369] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e9f750) 00:17:36.543 [2024-11-26 20:45:31.448374] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:36.543 [2024-11-26 20:45:31.448380] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:17:36.543 [2024-11-26 20:45:31.448388] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:17:36.543 [2024-11-26 20:45:31.448394] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:36.543 [2024-11-26 20:45:31.448398] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e9f750) 00:17:36.543 [2024-11-26 20:45:31.448405] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.543 [2024-11-26 20:45:31.448425] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f03740, cid 0, qid 0 00:17:36.543 [2024-11-26 20:45:31.448431] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f038c0, cid 1, qid 0 00:17:36.543 [2024-11-26 20:45:31.448436] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f03a40, cid 2, qid 0 00:17:36.543 [2024-11-26 20:45:31.448440] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f03bc0, cid 3, qid 0 00:17:36.543 [2024-11-26 20:45:31.448445] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f03d40, cid 4, qid 0 00:17:36.543 [2024-11-26 20:45:31.448518] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:36.543 [2024-11-26 20:45:31.448524] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:36.543 [2024-11-26 20:45:31.448527] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:36.543 [2024-11-26 20:45:31.448531] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f03d40) on tqpair=0x1e9f750 00:17:36.543 [2024-11-26 
20:45:31.448537] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:17:36.543 [2024-11-26 20:45:31.448543] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:17:36.543 [2024-11-26 20:45:31.448552] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:36.543 [2024-11-26 20:45:31.448556] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e9f750) 00:17:36.543 [2024-11-26 20:45:31.448562] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.543 [2024-11-26 20:45:31.448575] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f03d40, cid 4, qid 0 00:17:36.543 [2024-11-26 20:45:31.448620] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:36.543 [2024-11-26 20:45:31.448625] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:36.543 [2024-11-26 20:45:31.448629] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:36.543 [2024-11-26 20:45:31.448633] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e9f750): datao=0, datal=4096, cccid=4 00:17:36.543 [2024-11-26 20:45:31.448638] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1f03d40) on tqpair(0x1e9f750): expected_datao=0, payload_size=4096 00:17:36.543 [2024-11-26 20:45:31.448643] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:36.543 [2024-11-26 20:45:31.448650] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:36.543 [2024-11-26 20:45:31.448654] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:36.543 [2024-11-26 20:45:31.448661] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:36.543 [2024-11-26 20:45:31.448667] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:36.543 [2024-11-26 20:45:31.448670] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:36.543 [2024-11-26 20:45:31.448674] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f03d40) on tqpair=0x1e9f750 00:17:36.543 [2024-11-26 20:45:31.448687] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:17:36.543 [2024-11-26 20:45:31.448711] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:36.543 [2024-11-26 20:45:31.448715] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e9f750) 00:17:36.543 [2024-11-26 20:45:31.448721] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.543 [2024-11-26 20:45:31.448728] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:36.543 [2024-11-26 20:45:31.448732] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:36.543 [2024-11-26 20:45:31.448736] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1e9f750) 00:17:36.543 [2024-11-26 20:45:31.448742] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:17:36.543 [2024-11-26 20:45:31.448760] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f03d40, cid 4, qid 0 00:17:36.543 [2024-11-26 20:45:31.448766] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f03ec0, cid 5, qid 0 00:17:36.543 [2024-11-26 20:45:31.448849] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:36.543 [2024-11-26 20:45:31.448854] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:36.543 [2024-11-26 20:45:31.448858] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:36.543 [2024-11-26 20:45:31.448862] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e9f750): datao=0, datal=1024, cccid=4 00:17:36.543 [2024-11-26 20:45:31.448867] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1f03d40) on tqpair(0x1e9f750): expected_datao=0, payload_size=1024 00:17:36.543 [2024-11-26 20:45:31.448872] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:36.543 [2024-11-26 20:45:31.448878] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:36.543 [2024-11-26 20:45:31.448882] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:36.543 [2024-11-26 20:45:31.448887] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:36.543 [2024-11-26 20:45:31.448892] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:36.543 [2024-11-26 20:45:31.448896] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:36.543 [2024-11-26 20:45:31.448900] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f03ec0) on tqpair=0x1e9f750 00:17:36.543 [2024-11-26 20:45:31.448914] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:36.543 [2024-11-26 20:45:31.448920] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:36.543 [2024-11-26 20:45:31.448924] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:36.543 [2024-11-26 20:45:31.448928] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f03d40) on tqpair=0x1e9f750 00:17:36.544 [2024-11-26 20:45:31.448937] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:36.544 [2024-11-26 20:45:31.448941] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e9f750) 00:17:36.544 [2024-11-26 20:45:31.448948] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.544 [2024-11-26 20:45:31.448964] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f03d40, cid 4, qid 0 00:17:36.544 [2024-11-26 20:45:31.449021] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:36.544 [2024-11-26 20:45:31.449027] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:36.544 [2024-11-26 20:45:31.449030] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:36.544 [2024-11-26 20:45:31.449034] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e9f750): datao=0, datal=3072, cccid=4 00:17:36.544 [2024-11-26 20:45:31.449039] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1f03d40) on tqpair(0x1e9f750): expected_datao=0, payload_size=3072 00:17:36.544 [2024-11-26 20:45:31.449044] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:36.544 [2024-11-26 20:45:31.449051] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 
00:17:36.544 [2024-11-26 20:45:31.449054] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:36.544 [2024-11-26 20:45:31.449062] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:36.544 [2024-11-26 20:45:31.449068] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:36.544 [2024-11-26 20:45:31.449071] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:36.544 [2024-11-26 20:45:31.449075] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f03d40) on tqpair=0x1e9f750 00:17:36.544 [2024-11-26 20:45:31.449083] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:36.544 [2024-11-26 20:45:31.449087] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e9f750) 00:17:36.544 [2024-11-26 20:45:31.449093] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.544 [2024-11-26 20:45:31.449110] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f03d40, cid 4, qid 0 00:17:36.544 [2024-11-26 20:45:31.449171] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:36.544 [2024-11-26 20:45:31.449178] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:36.544 [2024-11-26 20:45:31.449182] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:36.544 [2024-11-26 20:45:31.449186] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e9f750): datao=0, datal=8, cccid=4 00:17:36.544 [2024-11-26 20:45:31.449190] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1f03d40) on tqpair(0x1e9f750): expected_datao=0, payload_size=8 00:17:36.544 [2024-11-26 20:45:31.449195] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:36.544 [2024-11-26 20:45:31.449201] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:36.544 [2024-11-26 20:45:31.449205] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:36.544 ===================================================== 00:17:36.544 NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2014-08.org.nvmexpress.discovery 00:17:36.544 ===================================================== 00:17:36.544 Controller Capabilities/Features 00:17:36.544 ================================ 00:17:36.544 Vendor ID: 0000 00:17:36.544 Subsystem Vendor ID: 0000 00:17:36.544 Serial Number: .................... 00:17:36.544 Model Number: ........................................ 
00:17:36.544 Firmware Version: 25.01 00:17:36.544 Recommended Arb Burst: 0 00:17:36.544 IEEE OUI Identifier: 00 00 00 00:17:36.544 Multi-path I/O 00:17:36.544 May have multiple subsystem ports: No 00:17:36.544 May have multiple controllers: No 00:17:36.544 Associated with SR-IOV VF: No 00:17:36.544 Max Data Transfer Size: 131072 00:17:36.544 Max Number of Namespaces: 0 00:17:36.544 Max Number of I/O Queues: 1024 00:17:36.544 NVMe Specification Version (VS): 1.3 00:17:36.544 NVMe Specification Version (Identify): 1.3 00:17:36.544 Maximum Queue Entries: 128 00:17:36.544 Contiguous Queues Required: Yes 00:17:36.544 Arbitration Mechanisms Supported 00:17:36.544 Weighted Round Robin: Not Supported 00:17:36.544 Vendor Specific: Not Supported 00:17:36.544 Reset Timeout: 15000 ms 00:17:36.544 Doorbell Stride: 4 bytes 00:17:36.544 NVM Subsystem Reset: Not Supported 00:17:36.544 Command Sets Supported 00:17:36.544 NVM Command Set: Supported 00:17:36.544 Boot Partition: Not Supported 00:17:36.544 Memory Page Size Minimum: 4096 bytes 00:17:36.544 Memory Page Size Maximum: 4096 bytes 00:17:36.544 Persistent Memory Region: Not Supported 00:17:36.544 Optional Asynchronous Events Supported 00:17:36.544 Namespace Attribute Notices: Not Supported 00:17:36.544 Firmware Activation Notices: Not Supported 00:17:36.544 ANA Change Notices: Not Supported 00:17:36.544 PLE Aggregate Log Change Notices: Not Supported 00:17:36.544 LBA Status Info Alert Notices: Not Supported 00:17:36.544 EGE Aggregate Log Change Notices: Not Supported 00:17:36.544 Normal NVM Subsystem Shutdown event: Not Supported 00:17:36.544 Zone Descriptor Change Notices: Not Supported 00:17:36.544 Discovery Log Change Notices: Supported 00:17:36.544 Controller Attributes 00:17:36.544 128-bit Host Identifier: Not Supported 00:17:36.544 Non-Operational Permissive Mode: Not Supported 00:17:36.544 NVM Sets: Not Supported 00:17:36.544 Read Recovery Levels: Not Supported 00:17:36.544 Endurance Groups: Not Supported 00:17:36.544 Predictable Latency Mode: Not Supported 00:17:36.544 Traffic Based Keep ALive: Not Supported 00:17:36.544 Namespace Granularity: Not Supported 00:17:36.544 SQ Associations: Not Supported 00:17:36.544 UUID List: Not Supported 00:17:36.544 Multi-Domain Subsystem: Not Supported 00:17:36.544 Fixed Capacity Management: Not Supported 00:17:36.544 Variable Capacity Management: Not Supported 00:17:36.544 Delete Endurance Group: Not Supported 00:17:36.544 Delete NVM Set: Not Supported 00:17:36.544 Extended LBA Formats Supported: Not Supported 00:17:36.544 Flexible Data Placement Supported: Not Supported 00:17:36.544 00:17:36.544 Controller Memory Buffer Support 00:17:36.544 ================================ 00:17:36.544 Supported: No 00:17:36.544 00:17:36.544 Persistent Memory Region Support 00:17:36.544 ================================ 00:17:36.544 Supported: No 00:17:36.544 00:17:36.544 Admin Command Set Attributes 00:17:36.544 ============================ 00:17:36.544 Security Send/Receive: Not Supported 00:17:36.545 Format NVM: Not Supported 00:17:36.545 Firmware Activate/Download: Not Supported 00:17:36.545 Namespace Management: Not Supported 00:17:36.545 Device Self-Test: Not Supported 00:17:36.545 Directives: Not Supported 00:17:36.545 NVMe-MI: Not Supported 00:17:36.545 Virtualization Management: Not Supported 00:17:36.545 Doorbell Buffer Config: Not Supported 00:17:36.545 Get LBA Status Capability: Not Supported 00:17:36.545 Command & Feature Lockdown Capability: Not Supported 00:17:36.545 Abort Command Limit: 1 00:17:36.545 Async 
Event Request Limit: 4 00:17:36.545 Number of Firmware Slots: N/A 00:17:36.545 Firmware Slot 1 Read-Only: N/A 00:17:36.545 Firmware Activation Without Reset: N/A 00:17:36.545 Multiple Update Detection Support: N/A 00:17:36.545 Firmware Update Granularity: No Information Provided 00:17:36.545 Per-Namespace SMART Log: No 00:17:36.545 Asymmetric Namespace Access Log Page: Not Supported 00:17:36.545 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:17:36.545 Command Effects Log Page: Not Supported 00:17:36.545 Get Log Page Extended Data: Supported 00:17:36.545 Telemetry Log Pages: Not Supported 00:17:36.545 Persistent Event Log Pages: Not Supported 00:17:36.545 Supported Log Pages Log Page: May Support 00:17:36.545 Commands Supported & Effects Log Page: Not Supported 00:17:36.545 Feature Identifiers & Effects Log Page:May Support 00:17:36.545 NVMe-MI Commands & Effects Log Page: May Support 00:17:36.545 Data Area 4 for Telemetry Log: Not Supported 00:17:36.545 Error Log Page Entries Supported: 128 00:17:36.545 Keep Alive: Not Supported 00:17:36.545 00:17:36.545 NVM Command Set Attributes 00:17:36.545 ========================== 00:17:36.545 Submission Queue Entry Size 00:17:36.545 Max: 1 00:17:36.545 Min: 1 00:17:36.545 Completion Queue Entry Size 00:17:36.545 Max: 1 00:17:36.545 Min: 1 00:17:36.545 Number of Namespaces: 0 00:17:36.545 Compare Command: Not Supported 00:17:36.545 Write Uncorrectable Command: Not Supported 00:17:36.545 Dataset Management Command: Not Supported 00:17:36.545 Write Zeroes Command: Not Supported 00:17:36.545 Set Features Save Field: Not Supported 00:17:36.545 Reservations: Not Supported 00:17:36.545 Timestamp: Not Supported 00:17:36.545 Copy: Not Supported 00:17:36.545 Volatile Write Cache: Not Present 00:17:36.545 Atomic Write Unit (Normal): 1 00:17:36.545 Atomic Write Unit (PFail): 1 00:17:36.545 Atomic Compare & Write Unit: 1 00:17:36.545 Fused Compare & Write: Supported 00:17:36.545 Scatter-Gather List 00:17:36.545 SGL Command Set: Supported 00:17:36.545 SGL Keyed: Supported 00:17:36.545 SGL Bit Bucket Descriptor: Not Supported 00:17:36.545 SGL Metadata Pointer: Not Supported 00:17:36.545 Oversized SGL: Not Supported 00:17:36.545 SGL Metadata Address: Not Supported 00:17:36.545 SGL Offset: Supported 00:17:36.545 Transport SGL Data Block: Not Supported 00:17:36.545 Replay Protected Memory Block: Not Supported 00:17:36.545 00:17:36.545 Firmware Slot Information 00:17:36.545 ========================= 00:17:36.545 Active slot: 0 00:17:36.545 00:17:36.545 00:17:36.545 Error Log 00:17:36.545 ========= 00:17:36.545 00:17:36.545 Active Namespaces 00:17:36.545 ================= 00:17:36.545 Discovery Log Page 00:17:36.545 ================== 00:17:36.545 Generation Counter: 2 00:17:36.545 Number of Records: 2 00:17:36.545 Record Format: 0 00:17:36.545 00:17:36.545 Discovery Log Entry 0 00:17:36.545 ---------------------- 00:17:36.545 Transport Type: 3 (TCP) 00:17:36.545 Address Family: 1 (IPv4) 00:17:36.545 Subsystem Type: 3 (Current Discovery Subsystem) 00:17:36.545 Entry Flags: 00:17:36.545 Duplicate Returned Information: 1 00:17:36.545 Explicit Persistent Connection Support for Discovery: 1 00:17:36.545 Transport Requirements: 00:17:36.545 Secure Channel: Not Required 00:17:36.545 Port ID: 0 (0x0000) 00:17:36.545 Controller ID: 65535 (0xffff) 00:17:36.545 Admin Max SQ Size: 128 00:17:36.545 Transport Service Identifier: 4420 00:17:36.545 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:17:36.545 Transport Address: 10.0.0.3 00:17:36.545 
Discovery Log Entry 1 00:17:36.545 ---------------------- 00:17:36.545 Transport Type: 3 (TCP) 00:17:36.545 Address Family: 1 (IPv4) 00:17:36.545 Subsystem Type: 2 (NVM Subsystem) 00:17:36.545 Entry Flags: 00:17:36.545 Duplicate Returned Information: 0 00:17:36.545 Explicit Persistent Connection Support for Discovery: 0 00:17:36.545 Transport Requirements: 00:17:36.545 Secure Channel: Not Required 00:17:36.545 Port ID: 0 (0x0000) 00:17:36.545 Controller ID: 65535 (0xffff) 00:17:36.545 Admin Max SQ Size: 128 00:17:36.545 Transport Service Identifier: 4420 00:17:36.545 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:17:36.545 Transport Address: 10.0.0.3 [2024-11-26 20:45:31.449217] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:36.545 [2024-11-26 20:45:31.449223] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:36.545 [2024-11-26 20:45:31.449227] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:36.545 [2024-11-26 20:45:31.449231] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f03d40) on tqpair=0x1e9f750 00:17:36.545 [2024-11-26 20:45:31.449337] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:17:36.545 [2024-11-26 20:45:31.449350] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f03740) on tqpair=0x1e9f750 00:17:36.545 [2024-11-26 20:45:31.449357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:36.546 [2024-11-26 20:45:31.449363] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f038c0) on tqpair=0x1e9f750 00:17:36.546 [2024-11-26 20:45:31.449368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:36.546 [2024-11-26 20:45:31.449373] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f03a40) on tqpair=0x1e9f750 00:17:36.546 [2024-11-26 20:45:31.449377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:36.546 [2024-11-26 20:45:31.449383] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f03bc0) on tqpair=0x1e9f750 00:17:36.546 [2024-11-26 20:45:31.449387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:36.546 [2024-11-26 20:45:31.449399] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:36.546 [2024-11-26 20:45:31.449403] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:36.546 [2024-11-26 20:45:31.449407] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e9f750) 00:17:36.546 [2024-11-26 20:45:31.449413] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.546 [2024-11-26 20:45:31.449432] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f03bc0, cid 3, qid 0 00:17:36.546 [2024-11-26 20:45:31.449483] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:36.546 [2024-11-26 20:45:31.449489] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:36.546 [2024-11-26 20:45:31.449493] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:36.546 [2024-11-26 20:45:31.449497] 
nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f03bc0) on tqpair=0x1e9f750 00:17:36.546 [2024-11-26 20:45:31.449503] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:36.546 [2024-11-26 20:45:31.449507] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:36.546 [2024-11-26 20:45:31.449511] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e9f750) 00:17:36.546 [2024-11-26 20:45:31.449517] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.546 [2024-11-26 20:45:31.449533] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f03bc0, cid 3, qid 0 00:17:36.546 [2024-11-26 20:45:31.449584] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:36.546 [2024-11-26 20:45:31.449590] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:36.546 [2024-11-26 20:45:31.449594] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:36.546 [2024-11-26 20:45:31.449598] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f03bc0) on tqpair=0x1e9f750 00:17:36.546 [2024-11-26 20:45:31.449603] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:17:36.546 [2024-11-26 20:45:31.449608] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:17:36.546 [2024-11-26 20:45:31.449617] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:36.546 [2024-11-26 20:45:31.449621] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:36.546 [2024-11-26 20:45:31.449625] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e9f750) 00:17:36.546 [2024-11-26 20:45:31.449631] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.546 [2024-11-26 20:45:31.449644] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f03bc0, cid 3, qid 0 00:17:36.546 [2024-11-26 20:45:31.449688] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:36.546 [2024-11-26 20:45:31.449693] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:36.546 [2024-11-26 20:45:31.449697] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:36.546 [2024-11-26 20:45:31.449701] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f03bc0) on tqpair=0x1e9f750 00:17:36.546 [2024-11-26 20:45:31.449710] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:36.546 [2024-11-26 20:45:31.449714] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:36.546 [2024-11-26 20:45:31.449718] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e9f750) 00:17:36.546 [2024-11-26 20:45:31.449724] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.546 [2024-11-26 20:45:31.449737] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f03bc0, cid 3, qid 0 00:17:36.546 [2024-11-26 20:45:31.449775] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:36.546 [2024-11-26 20:45:31.449781] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:36.546 [2024-11-26 
20:45:31.449785] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:36.546 [2024-11-26 20:45:31.449789] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f03bc0) on tqpair=0x1e9f750 00:17:36.546 [2024-11-26 20:45:31.449797] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:36.546 [2024-11-26 20:45:31.449802] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:36.546 [2024-11-26 20:45:31.449805] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e9f750) 00:17:36.546 [2024-11-26 20:45:31.449812] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.546 [2024-11-26 20:45:31.449824] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f03bc0, cid 3, qid 0 00:17:36.546 [2024-11-26 20:45:31.449862] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:36.546 [2024-11-26 20:45:31.449868] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:36.546 [2024-11-26 20:45:31.449872] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:36.546 [2024-11-26 20:45:31.449876] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f03bc0) on tqpair=0x1e9f750 00:17:36.546 [2024-11-26 20:45:31.449885] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:36.546 [2024-11-26 20:45:31.449889] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:36.546 [2024-11-26 20:45:31.449893] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e9f750) 00:17:36.546 [2024-11-26 20:45:31.449899] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.546 [2024-11-26 20:45:31.449912] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f03bc0, cid 3, qid 0 00:17:36.546 [2024-11-26 20:45:31.449948] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:36.546 [2024-11-26 20:45:31.449954] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:36.546 [2024-11-26 20:45:31.449958] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:36.546 [2024-11-26 20:45:31.449962] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f03bc0) on tqpair=0x1e9f750 00:17:36.546 [2024-11-26 20:45:31.449970] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:36.546 [2024-11-26 20:45:31.449975] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:36.546 [2024-11-26 20:45:31.449978] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e9f750) 00:17:36.546 [2024-11-26 20:45:31.449985] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.546 [2024-11-26 20:45:31.449997] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f03bc0, cid 3, qid 0 00:17:36.546 [2024-11-26 20:45:31.450030] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:36.546 [2024-11-26 20:45:31.450036] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:36.546 [2024-11-26 20:45:31.450040] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:36.546 [2024-11-26 20:45:31.450044] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f03bc0) on 
tqpair=0x1e9f750 00:17:36.546 [2024-11-26 20:45:31.450053] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:36.547 [2024-11-26 20:45:31.450057] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:36.547 [2024-11-26 20:45:31.450061] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e9f750) 00:17:36.547 [2024-11-26 20:45:31.450067] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.547 [2024-11-26 20:45:31.450080] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f03bc0, cid 3, qid 0 00:17:36.547 [2024-11-26 20:45:31.450116] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:36.547 [2024-11-26 20:45:31.450122] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:36.547 [2024-11-26 20:45:31.450131] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:36.547 [2024-11-26 20:45:31.450135] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f03bc0) on tqpair=0x1e9f750 00:17:36.547 [2024-11-26 20:45:31.450143] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:36.547 [2024-11-26 20:45:31.450148] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:36.547 [2024-11-26 20:45:31.450151] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e9f750) 00:17:36.547 [2024-11-26 20:45:31.450168] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.547 [2024-11-26 20:45:31.450182] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f03bc0, cid 3, qid 0 00:17:36.547 [2024-11-26 20:45:31.450221] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:36.547 [2024-11-26 20:45:31.450227] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:36.547 [2024-11-26 20:45:31.450231] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:36.547 [2024-11-26 20:45:31.450235] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f03bc0) on tqpair=0x1e9f750 00:17:36.547 [2024-11-26 20:45:31.450243] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:36.547 [2024-11-26 20:45:31.450248] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:36.547 [2024-11-26 20:45:31.450251] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e9f750) 00:17:36.547 [2024-11-26 20:45:31.450258] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.547 [2024-11-26 20:45:31.450271] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f03bc0, cid 3, qid 0 00:17:36.547 [2024-11-26 20:45:31.450306] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:36.547 [2024-11-26 20:45:31.450312] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:36.547 [2024-11-26 20:45:31.450315] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:36.547 [2024-11-26 20:45:31.450319] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f03bc0) on tqpair=0x1e9f750 00:17:36.547 [2024-11-26 20:45:31.450328] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:36.547 [2024-11-26 20:45:31.450332] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:36.547 [2024-11-26 20:45:31.450336] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e9f750) 00:17:36.547 [2024-11-26 20:45:31.450342] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.547 [2024-11-26 20:45:31.450355] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f03bc0, cid 3, qid 0 00:17:36.547 [2024-11-26 20:45:31.450388] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:36.547 [2024-11-26 20:45:31.450394] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:36.547 [2024-11-26 20:45:31.450398] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:36.547 [2024-11-26 20:45:31.450401] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f03bc0) on tqpair=0x1e9f750 00:17:36.547 [2024-11-26 20:45:31.450410] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:36.547 [2024-11-26 20:45:31.450414] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:36.547 [2024-11-26 20:45:31.450418] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e9f750) 00:17:36.547 [2024-11-26 20:45:31.450424] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.547 [2024-11-26 20:45:31.450437] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f03bc0, cid 3, qid 0 00:17:36.547 [2024-11-26 20:45:31.450478] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:36.547 [2024-11-26 20:45:31.450484] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:36.547 [2024-11-26 20:45:31.450487] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:36.547 [2024-11-26 20:45:31.450491] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f03bc0) on tqpair=0x1e9f750 00:17:36.547 [2024-11-26 20:45:31.450500] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:36.547 [2024-11-26 20:45:31.450504] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:36.547 [2024-11-26 20:45:31.450508] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e9f750) 00:17:36.547 [2024-11-26 20:45:31.450514] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.547 [2024-11-26 20:45:31.450526] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f03bc0, cid 3, qid 0 00:17:36.547 [2024-11-26 20:45:31.450559] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:36.547 [2024-11-26 20:45:31.450565] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:36.547 [2024-11-26 20:45:31.450569] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:36.547 [2024-11-26 20:45:31.450573] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f03bc0) on tqpair=0x1e9f750 00:17:36.547 [2024-11-26 20:45:31.450581] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:36.547 [2024-11-26 20:45:31.450585] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:36.547 [2024-11-26 20:45:31.450589] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e9f750) 00:17:36.547 
[2024-11-26 20:45:31.450595] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.547 [2024-11-26 20:45:31.450608] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f03bc0, cid 3, qid 0 00:17:36.547 [2024-11-26 20:45:31.450641] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:36.548 [2024-11-26 20:45:31.450647] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:36.548 [2024-11-26 20:45:31.450650] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:36.548 [2024-11-26 20:45:31.450654] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f03bc0) on tqpair=0x1e9f750 00:17:36.548 [2024-11-26 20:45:31.450663] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:36.548 [2024-11-26 20:45:31.450667] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:36.548 [2024-11-26 20:45:31.450671] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e9f750) 00:17:36.548 [2024-11-26 20:45:31.450678] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.548 [2024-11-26 20:45:31.450690] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f03bc0, cid 3, qid 0 00:17:36.548 [2024-11-26 20:45:31.450726] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:36.548 [2024-11-26 20:45:31.450731] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:36.548 [2024-11-26 20:45:31.450735] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:36.548 [2024-11-26 20:45:31.450739] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f03bc0) on tqpair=0x1e9f750 00:17:36.548 [2024-11-26 20:45:31.450748] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:36.548 [2024-11-26 20:45:31.450752] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:36.548 [2024-11-26 20:45:31.450756] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e9f750) 00:17:36.548 [2024-11-26 20:45:31.450762] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.548 [2024-11-26 20:45:31.450775] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f03bc0, cid 3, qid 0 00:17:36.548 [2024-11-26 20:45:31.450813] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:36.548 [2024-11-26 20:45:31.450819] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:36.548 [2024-11-26 20:45:31.450822] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:36.548 [2024-11-26 20:45:31.450826] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f03bc0) on tqpair=0x1e9f750 00:17:36.548 [2024-11-26 20:45:31.450835] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:36.548 [2024-11-26 20:45:31.450839] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:36.548 [2024-11-26 20:45:31.450843] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e9f750) 00:17:36.548 [2024-11-26 20:45:31.450849] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.548 [2024-11-26 20:45:31.450862] 
nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f03bc0, cid 3, qid 0 00:17:36.548 [2024-11-26 20:45:31.450897] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:36.548 [2024-11-26 20:45:31.450903] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:36.548 [2024-11-26 20:45:31.450907] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:36.548 [2024-11-26 20:45:31.450911] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f03bc0) on tqpair=0x1e9f750 00:17:36.548 [2024-11-26 20:45:31.450919] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:36.548 [2024-11-26 20:45:31.450923] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:36.548 [2024-11-26 20:45:31.450927] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e9f750) 00:17:36.548 [2024-11-26 20:45:31.450934] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.548 [2024-11-26 20:45:31.450946] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f03bc0, cid 3, qid 0 00:17:36.548 [2024-11-26 20:45:31.450979] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:36.548 [2024-11-26 20:45:31.450985] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:36.548 [2024-11-26 20:45:31.450988] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:36.548 [2024-11-26 20:45:31.450992] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f03bc0) on tqpair=0x1e9f750 00:17:36.548 [2024-11-26 20:45:31.451001] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:36.548 [2024-11-26 20:45:31.451005] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:36.548 [2024-11-26 20:45:31.451009] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e9f750) 00:17:36.548 [2024-11-26 20:45:31.451015] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.548 [2024-11-26 20:45:31.451028] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f03bc0, cid 3, qid 0 00:17:36.548 [2024-11-26 20:45:31.451064] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:36.548 [2024-11-26 20:45:31.451070] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:36.548 [2024-11-26 20:45:31.451073] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:36.548 [2024-11-26 20:45:31.451077] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f03bc0) on tqpair=0x1e9f750 00:17:36.548 [2024-11-26 20:45:31.451086] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:36.548 [2024-11-26 20:45:31.451090] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:36.548 [2024-11-26 20:45:31.451094] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e9f750) 00:17:36.548 [2024-11-26 20:45:31.451100] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.548 [2024-11-26 20:45:31.451113] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f03bc0, cid 3, qid 0 00:17:36.548 [2024-11-26 20:45:31.451149] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:36.548 
[2024-11-26 20:45:31.455164] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:36.548 [2024-11-26 20:45:31.455179] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:36.548 [2024-11-26 20:45:31.455184] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f03bc0) on tqpair=0x1e9f750 00:17:36.548 [2024-11-26 20:45:31.455195] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:36.548 [2024-11-26 20:45:31.455199] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:36.548 [2024-11-26 20:45:31.455203] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e9f750) 00:17:36.548 [2024-11-26 20:45:31.455210] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.548 [2024-11-26 20:45:31.455227] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f03bc0, cid 3, qid 0 00:17:36.548 [2024-11-26 20:45:31.455266] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:36.548 [2024-11-26 20:45:31.455272] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:36.548 [2024-11-26 20:45:31.455276] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:36.548 [2024-11-26 20:45:31.455280] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f03bc0) on tqpair=0x1e9f750 00:17:36.548 [2024-11-26 20:45:31.455287] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 5 milliseconds 00:17:36.548 00:17:36.548 20:45:31 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:17:36.548 [2024-11-26 20:45:31.498748] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
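The host/identify.sh step above runs build/bin/spdk_nvme_identify against the TCP listener at 10.0.0.3:4420 with all debug log flags enabled (-L all); that is what produces the nvme_tcp/nvme_ctrlr trace that follows and the controller report further down. For orientation only, a minimal connect-and-identify flow through SPDK's public host API could look roughly like the sketch below. This is a hedged illustration, not the source of the identify example; the file and program names are made up, and error handling is reduced to the bare minimum.

/*
 * identify_sketch.c - illustrative sketch only (not SPDK's examples/nvme/identify).
 * Assumes an SPDK installation; connects to the same transport ID the test passes
 * via -r above and prints a few identify-controller fields.
 */
#include <stdio.h>

#include "spdk/env.h"
#include "spdk/nvme.h"

int main(void)
{
	struct spdk_env_opts env_opts;
	struct spdk_nvme_transport_id trid = {};
	struct spdk_nvme_ctrlr *ctrlr;
	const struct spdk_nvme_ctrlr_data *cdata;

	/* Bring up the SPDK environment (DPDK EAL/hugepages), as in the log above. */
	spdk_env_opts_init(&env_opts);
	env_opts.name = "identify_sketch";
	if (spdk_env_init(&env_opts) < 0) {
		return 1;
	}

	/* Same transport string the test passes to spdk_nvme_identify via -r. */
	if (spdk_nvme_transport_id_parse(&trid,
	    "trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 "
	    "subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
		return 1;
	}

	/*
	 * Connecting drives the admin-queue bring-up traced line by line in this log:
	 * icreq/icresp, FABRIC CONNECT, CC/CSTS property gets and sets, then IDENTIFY.
	 */
	ctrlr = spdk_nvme_connect(&trid, NULL, 0);
	if (ctrlr == NULL) {
		fprintf(stderr, "connect to %s failed\n", trid.traddr);
		return 1;
	}

	cdata = spdk_nvme_ctrlr_get_data(ctrlr);
	printf("Serial Number: %.*s\n", (int)sizeof(cdata->sn), (const char *)cdata->sn);
	printf("Model Number:  %.*s\n", (int)sizeof(cdata->mn), (const char *)cdata->mn);
	printf("Firmware:      %.*s\n", (int)sizeof(cdata->fr), (const char *)cdata->fr);

	spdk_nvme_detach(ctrlr);
	return 0;
}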
00:17:36.548 [2024-11-26 20:45:31.498821] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74689 ] 00:17:36.810 [2024-11-26 20:45:31.653123] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:17:36.811 [2024-11-26 20:45:31.657197] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:17:36.811 [2024-11-26 20:45:31.657210] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:17:36.811 [2024-11-26 20:45:31.657231] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:17:36.811 [2024-11-26 20:45:31.657244] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:17:36.811 [2024-11-26 20:45:31.657558] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:17:36.811 [2024-11-26 20:45:31.657600] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x107b750 0 00:17:36.811 [2024-11-26 20:45:31.665182] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:17:36.811 [2024-11-26 20:45:31.665200] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:17:36.811 [2024-11-26 20:45:31.665205] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:17:36.811 [2024-11-26 20:45:31.665209] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:17:36.811 [2024-11-26 20:45:31.665243] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:36.811 [2024-11-26 20:45:31.665249] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:36.811 [2024-11-26 20:45:31.665254] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x107b750) 00:17:36.811 [2024-11-26 20:45:31.665266] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:17:36.811 [2024-11-26 20:45:31.665294] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10df740, cid 0, qid 0 00:17:36.811 [2024-11-26 20:45:31.673173] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:36.811 [2024-11-26 20:45:31.673187] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:36.811 [2024-11-26 20:45:31.673192] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:36.811 [2024-11-26 20:45:31.673196] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10df740) on tqpair=0x107b750 00:17:36.811 [2024-11-26 20:45:31.673206] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:17:36.811 [2024-11-26 20:45:31.673213] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:17:36.811 [2024-11-26 20:45:31.673220] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:17:36.811 [2024-11-26 20:45:31.673236] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:36.811 [2024-11-26 20:45:31.673241] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:36.811 [2024-11-26 20:45:31.673245] 
nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x107b750) 00:17:36.811 [2024-11-26 20:45:31.673252] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.811 [2024-11-26 20:45:31.673273] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10df740, cid 0, qid 0 00:17:36.811 [2024-11-26 20:45:31.673315] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:36.811 [2024-11-26 20:45:31.673322] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:36.811 [2024-11-26 20:45:31.673325] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:36.811 [2024-11-26 20:45:31.673329] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10df740) on tqpair=0x107b750 00:17:36.811 [2024-11-26 20:45:31.673335] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:17:36.811 [2024-11-26 20:45:31.673342] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:17:36.811 [2024-11-26 20:45:31.673349] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:36.811 [2024-11-26 20:45:31.673353] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:36.811 [2024-11-26 20:45:31.673357] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x107b750) 00:17:36.811 [2024-11-26 20:45:31.673364] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.811 [2024-11-26 20:45:31.673377] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10df740, cid 0, qid 0 00:17:36.811 [2024-11-26 20:45:31.673411] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:36.811 [2024-11-26 20:45:31.673417] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:36.811 [2024-11-26 20:45:31.673421] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:36.811 [2024-11-26 20:45:31.673425] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10df740) on tqpair=0x107b750 00:17:36.811 [2024-11-26 20:45:31.673430] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:17:36.811 [2024-11-26 20:45:31.673438] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:17:36.811 [2024-11-26 20:45:31.673445] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:36.811 [2024-11-26 20:45:31.673449] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:36.811 [2024-11-26 20:45:31.673453] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x107b750) 00:17:36.811 [2024-11-26 20:45:31.673459] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.811 [2024-11-26 20:45:31.673472] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10df740, cid 0, qid 0 00:17:36.811 [2024-11-26 20:45:31.673505] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:36.811 [2024-11-26 20:45:31.673511] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:36.811 
[2024-11-26 20:45:31.673514] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:36.811 [2024-11-26 20:45:31.673518] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10df740) on tqpair=0x107b750 00:17:36.811 [2024-11-26 20:45:31.673523] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:17:36.811 [2024-11-26 20:45:31.673532] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:36.811 [2024-11-26 20:45:31.673537] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:36.811 [2024-11-26 20:45:31.673540] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x107b750) 00:17:36.811 [2024-11-26 20:45:31.673547] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.811 [2024-11-26 20:45:31.673560] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10df740, cid 0, qid 0 00:17:36.811 [2024-11-26 20:45:31.673598] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:36.811 [2024-11-26 20:45:31.673604] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:36.811 [2024-11-26 20:45:31.673608] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:36.811 [2024-11-26 20:45:31.673612] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10df740) on tqpair=0x107b750 00:17:36.811 [2024-11-26 20:45:31.673617] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:17:36.811 [2024-11-26 20:45:31.673622] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:17:36.811 [2024-11-26 20:45:31.673630] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:17:36.811 [2024-11-26 20:45:31.673740] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:17:36.812 [2024-11-26 20:45:31.673746] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:17:36.812 [2024-11-26 20:45:31.673754] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:36.812 [2024-11-26 20:45:31.673758] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:36.812 [2024-11-26 20:45:31.673761] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x107b750) 00:17:36.812 [2024-11-26 20:45:31.673768] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.812 [2024-11-26 20:45:31.673781] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10df740, cid 0, qid 0 00:17:36.812 [2024-11-26 20:45:31.673814] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:36.812 [2024-11-26 20:45:31.673820] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:36.812 [2024-11-26 20:45:31.673825] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:36.812 [2024-11-26 20:45:31.673829] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10df740) on tqpair=0x107b750 
00:17:36.812 [2024-11-26 20:45:31.673834] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:17:36.812 [2024-11-26 20:45:31.673843] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:36.812 [2024-11-26 20:45:31.673847] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:36.812 [2024-11-26 20:45:31.673851] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x107b750) 00:17:36.812 [2024-11-26 20:45:31.673857] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.812 [2024-11-26 20:45:31.673870] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10df740, cid 0, qid 0 00:17:36.812 [2024-11-26 20:45:31.673903] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:36.812 [2024-11-26 20:45:31.673909] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:36.812 [2024-11-26 20:45:31.673912] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:36.812 [2024-11-26 20:45:31.673916] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10df740) on tqpair=0x107b750 00:17:36.812 [2024-11-26 20:45:31.673921] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:17:36.812 [2024-11-26 20:45:31.673926] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:17:36.812 [2024-11-26 20:45:31.673934] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:17:36.812 [2024-11-26 20:45:31.673943] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:17:36.812 [2024-11-26 20:45:31.673952] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:36.812 [2024-11-26 20:45:31.673957] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x107b750) 00:17:36.812 [2024-11-26 20:45:31.673963] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.812 [2024-11-26 20:45:31.673976] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10df740, cid 0, qid 0 00:17:36.812 [2024-11-26 20:45:31.674070] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:36.812 [2024-11-26 20:45:31.674076] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:36.812 [2024-11-26 20:45:31.674080] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:36.812 [2024-11-26 20:45:31.674084] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x107b750): datao=0, datal=4096, cccid=0 00:17:36.812 [2024-11-26 20:45:31.674089] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x10df740) on tqpair(0x107b750): expected_datao=0, payload_size=4096 00:17:36.812 [2024-11-26 20:45:31.674094] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:36.812 [2024-11-26 20:45:31.674101] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:36.812 [2024-11-26 20:45:31.674105] 
nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:36.812 [2024-11-26 20:45:31.674113] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:36.812 [2024-11-26 20:45:31.674119] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:36.812 [2024-11-26 20:45:31.674122] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:36.812 [2024-11-26 20:45:31.674126] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10df740) on tqpair=0x107b750 00:17:36.812 [2024-11-26 20:45:31.674134] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:17:36.812 [2024-11-26 20:45:31.674139] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:17:36.812 [2024-11-26 20:45:31.674144] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:17:36.812 [2024-11-26 20:45:31.674152] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:17:36.812 [2024-11-26 20:45:31.674168] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:17:36.812 [2024-11-26 20:45:31.674174] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:17:36.812 [2024-11-26 20:45:31.674182] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:17:36.812 [2024-11-26 20:45:31.674189] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:36.812 [2024-11-26 20:45:31.674193] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:36.812 [2024-11-26 20:45:31.674197] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x107b750) 00:17:36.812 [2024-11-26 20:45:31.674204] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:36.812 [2024-11-26 20:45:31.674219] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10df740, cid 0, qid 0 00:17:36.812 [2024-11-26 20:45:31.674260] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:36.812 [2024-11-26 20:45:31.674266] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:36.812 [2024-11-26 20:45:31.674270] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:36.812 [2024-11-26 20:45:31.674274] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10df740) on tqpair=0x107b750 00:17:36.812 [2024-11-26 20:45:31.674281] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:36.812 [2024-11-26 20:45:31.674285] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:36.812 [2024-11-26 20:45:31.674289] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x107b750) 00:17:36.812 [2024-11-26 20:45:31.674295] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:36.812 [2024-11-26 20:45:31.674301] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:36.812 [2024-11-26 20:45:31.674305] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:36.812 [2024-11-26 
20:45:31.674309] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x107b750) 00:17:36.812 [2024-11-26 20:45:31.674314] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:36.812 [2024-11-26 20:45:31.674320] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:36.812 [2024-11-26 20:45:31.674324] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:36.812 [2024-11-26 20:45:31.674328] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x107b750) 00:17:36.812 [2024-11-26 20:45:31.674333] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:36.813 [2024-11-26 20:45:31.674339] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:36.813 [2024-11-26 20:45:31.674344] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:36.813 [2024-11-26 20:45:31.674347] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x107b750) 00:17:36.813 [2024-11-26 20:45:31.674353] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:36.813 [2024-11-26 20:45:31.674358] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:17:36.813 [2024-11-26 20:45:31.674366] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:17:36.813 [2024-11-26 20:45:31.674373] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:36.813 [2024-11-26 20:45:31.674377] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x107b750) 00:17:36.813 [2024-11-26 20:45:31.674383] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.813 [2024-11-26 20:45:31.674401] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10df740, cid 0, qid 0 00:17:36.813 [2024-11-26 20:45:31.674407] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10df8c0, cid 1, qid 0 00:17:36.813 [2024-11-26 20:45:31.674412] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10dfa40, cid 2, qid 0 00:17:36.813 [2024-11-26 20:45:31.674416] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10dfbc0, cid 3, qid 0 00:17:36.813 [2024-11-26 20:45:31.674421] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10dfd40, cid 4, qid 0 00:17:36.813 [2024-11-26 20:45:31.674492] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:36.813 [2024-11-26 20:45:31.674498] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:36.813 [2024-11-26 20:45:31.674501] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:36.813 [2024-11-26 20:45:31.674505] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10dfd40) on tqpair=0x107b750 00:17:36.813 [2024-11-26 20:45:31.674511] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:17:36.813 [2024-11-26 20:45:31.674517] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:17:36.813 [2024-11-26 20:45:31.674525] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:17:36.813 [2024-11-26 20:45:31.674532] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:17:36.813 [2024-11-26 20:45:31.674538] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:36.813 [2024-11-26 20:45:31.674542] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:36.813 [2024-11-26 20:45:31.674546] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x107b750) 00:17:36.813 [2024-11-26 20:45:31.674552] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:36.813 [2024-11-26 20:45:31.674566] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10dfd40, cid 4, qid 0 00:17:36.813 [2024-11-26 20:45:31.674602] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:36.813 [2024-11-26 20:45:31.674607] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:36.813 [2024-11-26 20:45:31.674611] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:36.813 [2024-11-26 20:45:31.674615] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10dfd40) on tqpair=0x107b750 00:17:36.813 [2024-11-26 20:45:31.674668] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:17:36.813 [2024-11-26 20:45:31.674677] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:17:36.813 [2024-11-26 20:45:31.674685] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:36.813 [2024-11-26 20:45:31.674689] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x107b750) 00:17:36.813 [2024-11-26 20:45:31.674695] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.813 [2024-11-26 20:45:31.674709] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10dfd40, cid 4, qid 0 00:17:36.813 [2024-11-26 20:45:31.674758] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:36.813 [2024-11-26 20:45:31.674764] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:36.813 [2024-11-26 20:45:31.674768] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:36.813 [2024-11-26 20:45:31.674771] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x107b750): datao=0, datal=4096, cccid=4 00:17:36.813 [2024-11-26 20:45:31.674776] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x10dfd40) on tqpair(0x107b750): expected_datao=0, payload_size=4096 00:17:36.813 [2024-11-26 20:45:31.674781] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:36.813 [2024-11-26 20:45:31.674788] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:36.813 [2024-11-26 20:45:31.674792] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:36.813 [2024-11-26 
20:45:31.674799] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:36.813 [2024-11-26 20:45:31.674805] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:36.813 [2024-11-26 20:45:31.674808] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:36.813 [2024-11-26 20:45:31.674812] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10dfd40) on tqpair=0x107b750 00:17:36.813 [2024-11-26 20:45:31.674821] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:17:36.813 [2024-11-26 20:45:31.674832] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:17:36.813 [2024-11-26 20:45:31.674841] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:17:36.813 [2024-11-26 20:45:31.674848] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:36.813 [2024-11-26 20:45:31.674852] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x107b750) 00:17:36.813 [2024-11-26 20:45:31.674858] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.813 [2024-11-26 20:45:31.674872] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10dfd40, cid 4, qid 0 00:17:36.813 [2024-11-26 20:45:31.674938] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:36.813 [2024-11-26 20:45:31.674944] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:36.813 [2024-11-26 20:45:31.674948] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:36.813 [2024-11-26 20:45:31.674952] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x107b750): datao=0, datal=4096, cccid=4 00:17:36.813 [2024-11-26 20:45:31.674957] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x10dfd40) on tqpair(0x107b750): expected_datao=0, payload_size=4096 00:17:36.813 [2024-11-26 20:45:31.674961] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:36.813 [2024-11-26 20:45:31.674967] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:36.813 [2024-11-26 20:45:31.674971] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:36.813 [2024-11-26 20:45:31.674979] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:36.813 [2024-11-26 20:45:31.674984] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:36.813 [2024-11-26 20:45:31.674988] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:36.814 [2024-11-26 20:45:31.674992] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10dfd40) on tqpair=0x107b750 00:17:36.814 [2024-11-26 20:45:31.675008] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:17:36.814 [2024-11-26 20:45:31.675017] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:17:36.814 [2024-11-26 20:45:31.675024] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:36.814 [2024-11-26 20:45:31.675028] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=4 on tqpair(0x107b750) 00:17:36.814 [2024-11-26 20:45:31.675035] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.814 [2024-11-26 20:45:31.675048] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10dfd40, cid 4, qid 0 00:17:36.814 [2024-11-26 20:45:31.675099] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:36.814 [2024-11-26 20:45:31.675105] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:36.814 [2024-11-26 20:45:31.675108] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:36.814 [2024-11-26 20:45:31.675112] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x107b750): datao=0, datal=4096, cccid=4 00:17:36.814 [2024-11-26 20:45:31.675117] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x10dfd40) on tqpair(0x107b750): expected_datao=0, payload_size=4096 00:17:36.814 [2024-11-26 20:45:31.675122] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:36.814 [2024-11-26 20:45:31.675128] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:36.814 [2024-11-26 20:45:31.675132] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:36.814 [2024-11-26 20:45:31.675140] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:36.814 [2024-11-26 20:45:31.675145] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:36.814 [2024-11-26 20:45:31.675149] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:36.814 [2024-11-26 20:45:31.675153] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10dfd40) on tqpair=0x107b750 00:17:36.814 [2024-11-26 20:45:31.675170] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:17:36.814 [2024-11-26 20:45:31.675178] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:17:36.814 [2024-11-26 20:45:31.675188] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:17:36.814 [2024-11-26 20:45:31.675195] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:17:36.814 [2024-11-26 20:45:31.675200] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:17:36.814 [2024-11-26 20:45:31.675206] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:17:36.814 [2024-11-26 20:45:31.675212] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:17:36.814 [2024-11-26 20:45:31.675218] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:17:36.814 [2024-11-26 20:45:31.675224] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:17:36.814 [2024-11-26 20:45:31.675239] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:36.814 
[2024-11-26 20:45:31.675244] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x107b750) 00:17:36.814 [2024-11-26 20:45:31.675250] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.814 [2024-11-26 20:45:31.675257] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:36.814 [2024-11-26 20:45:31.675261] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:36.814 [2024-11-26 20:45:31.675264] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x107b750) 00:17:36.814 [2024-11-26 20:45:31.675270] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:17:36.814 [2024-11-26 20:45:31.675289] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10dfd40, cid 4, qid 0 00:17:36.814 [2024-11-26 20:45:31.675295] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10dfec0, cid 5, qid 0 00:17:36.814 [2024-11-26 20:45:31.675354] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:36.814 [2024-11-26 20:45:31.675359] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:36.814 [2024-11-26 20:45:31.675363] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:36.814 [2024-11-26 20:45:31.675367] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10dfd40) on tqpair=0x107b750 00:17:36.814 [2024-11-26 20:45:31.675373] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:36.814 [2024-11-26 20:45:31.675379] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:36.814 [2024-11-26 20:45:31.675383] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:36.814 [2024-11-26 20:45:31.675387] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10dfec0) on tqpair=0x107b750 00:17:36.814 [2024-11-26 20:45:31.675396] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:36.814 [2024-11-26 20:45:31.675401] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x107b750) 00:17:36.814 [2024-11-26 20:45:31.675407] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.814 [2024-11-26 20:45:31.675420] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10dfec0, cid 5, qid 0 00:17:36.814 [2024-11-26 20:45:31.675454] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:36.814 [2024-11-26 20:45:31.675460] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:36.814 [2024-11-26 20:45:31.675464] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:36.814 [2024-11-26 20:45:31.675468] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10dfec0) on tqpair=0x107b750 00:17:36.814 [2024-11-26 20:45:31.675477] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:36.814 [2024-11-26 20:45:31.675481] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x107b750) 00:17:36.814 [2024-11-26 20:45:31.675487] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.814 [2024-11-26 20:45:31.675500] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10dfec0, cid 5, qid 0 00:17:36.814 [2024-11-26 20:45:31.675568] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:36.814 [2024-11-26 20:45:31.675577] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:36.814 [2024-11-26 20:45:31.675580] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:36.814 [2024-11-26 20:45:31.675585] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10dfec0) on tqpair=0x107b750 00:17:36.814 [2024-11-26 20:45:31.675594] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:36.814 [2024-11-26 20:45:31.675598] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x107b750) 00:17:36.814 [2024-11-26 20:45:31.675604] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.814 [2024-11-26 20:45:31.675620] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10dfec0, cid 5, qid 0 00:17:36.815 [2024-11-26 20:45:31.675659] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:36.815 [2024-11-26 20:45:31.675664] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:36.815 [2024-11-26 20:45:31.675668] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:36.815 [2024-11-26 20:45:31.675672] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10dfec0) on tqpair=0x107b750 00:17:36.815 [2024-11-26 20:45:31.675690] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:36.815 [2024-11-26 20:45:31.675695] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x107b750) 00:17:36.815 [2024-11-26 20:45:31.675701] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.815 [2024-11-26 20:45:31.675709] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:36.815 [2024-11-26 20:45:31.675713] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x107b750) 00:17:36.815 [2024-11-26 20:45:31.675719] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.815 [2024-11-26 20:45:31.675726] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:36.815 [2024-11-26 20:45:31.675730] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x107b750) 00:17:36.815 [2024-11-26 20:45:31.675736] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.815 [2024-11-26 20:45:31.675744] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:36.815 [2024-11-26 20:45:31.675748] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x107b750) 00:17:36.815 [2024-11-26 20:45:31.675754] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.815 [2024-11-26 20:45:31.675768] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10dfec0, cid 5, qid 0 00:17:36.815 
[2024-11-26 20:45:31.675774] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10dfd40, cid 4, qid 0 00:17:36.815 [2024-11-26 20:45:31.675778] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10e0040, cid 6, qid 0 00:17:36.815 [2024-11-26 20:45:31.675783] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10e01c0, cid 7, qid 0 00:17:36.815 [2024-11-26 20:45:31.675901] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:36.815 [2024-11-26 20:45:31.675907] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:36.815 [2024-11-26 20:45:31.675910] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:36.815 [2024-11-26 20:45:31.675914] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x107b750): datao=0, datal=8192, cccid=5 00:17:36.815 [2024-11-26 20:45:31.675919] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x10dfec0) on tqpair(0x107b750): expected_datao=0, payload_size=8192 00:17:36.815 [2024-11-26 20:45:31.675924] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:36.815 [2024-11-26 20:45:31.675939] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:36.815 [2024-11-26 20:45:31.675943] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:36.815 [2024-11-26 20:45:31.675948] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:36.815 [2024-11-26 20:45:31.675954] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:36.815 [2024-11-26 20:45:31.675958] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:36.815 [2024-11-26 20:45:31.675961] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x107b750): datao=0, datal=512, cccid=4 00:17:36.815 [2024-11-26 20:45:31.675966] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x10dfd40) on tqpair(0x107b750): expected_datao=0, payload_size=512 00:17:36.815 [2024-11-26 20:45:31.675971] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:36.815 [2024-11-26 20:45:31.675977] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:36.815 [2024-11-26 20:45:31.675980] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:36.815 [2024-11-26 20:45:31.675986] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:36.815 [2024-11-26 20:45:31.675991] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:36.815 [2024-11-26 20:45:31.675995] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:36.815 [2024-11-26 20:45:31.675999] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x107b750): datao=0, datal=512, cccid=6 00:17:36.815 [2024-11-26 20:45:31.676003] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x10e0040) on tqpair(0x107b750): expected_datao=0, payload_size=512 00:17:36.815 [2024-11-26 20:45:31.676008] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:36.815 [2024-11-26 20:45:31.676014] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:36.815 [2024-11-26 20:45:31.676018] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:36.815 [2024-11-26 20:45:31.676023] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:36.815 [2024-11-26 20:45:31.676028] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:36.815 [2024-11-26 20:45:31.676032] 
nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:36.815 [2024-11-26 20:45:31.676036] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x107b750): datao=0, datal=4096, cccid=7 00:17:36.815 [2024-11-26 20:45:31.676041] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x10e01c0) on tqpair(0x107b750): expected_datao=0, payload_size=4096 00:17:36.815 [2024-11-26 20:45:31.676045] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:36.815 [2024-11-26 20:45:31.676052] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:36.815 [2024-11-26 20:45:31.676056] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:36.815 [2024-11-26 20:45:31.676063] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:36.815 [2024-11-26 20:45:31.676069] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:36.815 [2024-11-26 20:45:31.676072] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:36.815 [2024-11-26 20:45:31.676076] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10dfec0) on tqpair=0x107b750 00:17:36.815 [2024-11-26 20:45:31.676090] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:36.815 [2024-11-26 20:45:31.676096] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:36.815 [2024-11-26 20:45:31.676099] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:36.815 [2024-11-26 20:45:31.676103] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10dfd40) on tqpair=0x107b750 00:17:36.815 [2024-11-26 20:45:31.676116] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:36.815 [2024-11-26 20:45:31.676121] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:36.815 [2024-11-26 20:45:31.676125] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:36.815 [2024-11-26 20:45:31.676129] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10e0040) on tqpair=0x107b750 00:17:36.815 [2024-11-26 20:45:31.676136] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:36.815 [2024-11-26 20:45:31.676142] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:36.815 [2024-11-26 20:45:31.676145] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:36.815 [2024-11-26 20:45:31.676149] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10e01c0) on tqpair=0x107b750 00:17:36.815 ===================================================== 00:17:36.815 NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:17:36.815 ===================================================== 00:17:36.815 Controller Capabilities/Features 00:17:36.815 ================================ 00:17:36.816 Vendor ID: 8086 00:17:36.816 Subsystem Vendor ID: 8086 00:17:36.816 Serial Number: SPDK00000000000001 00:17:36.816 Model Number: SPDK bdev Controller 00:17:36.816 Firmware Version: 25.01 00:17:36.816 Recommended Arb Burst: 6 00:17:36.816 IEEE OUI Identifier: e4 d2 5c 00:17:36.816 Multi-path I/O 00:17:36.816 May have multiple subsystem ports: Yes 00:17:36.816 May have multiple controllers: Yes 00:17:36.816 Associated with SR-IOV VF: No 00:17:36.816 Max Data Transfer Size: 131072 00:17:36.816 Max Number of Namespaces: 32 00:17:36.816 Max Number of I/O Queues: 127 00:17:36.816 NVMe Specification Version (VS): 1.3 00:17:36.816 NVMe Specification Version (Identify): 1.3 
00:17:36.816 Maximum Queue Entries: 128 00:17:36.816 Contiguous Queues Required: Yes 00:17:36.816 Arbitration Mechanisms Supported 00:17:36.816 Weighted Round Robin: Not Supported 00:17:36.816 Vendor Specific: Not Supported 00:17:36.816 Reset Timeout: 15000 ms 00:17:36.816 Doorbell Stride: 4 bytes 00:17:36.816 NVM Subsystem Reset: Not Supported 00:17:36.816 Command Sets Supported 00:17:36.816 NVM Command Set: Supported 00:17:36.816 Boot Partition: Not Supported 00:17:36.816 Memory Page Size Minimum: 4096 bytes 00:17:36.816 Memory Page Size Maximum: 4096 bytes 00:17:36.816 Persistent Memory Region: Not Supported 00:17:36.816 Optional Asynchronous Events Supported 00:17:36.816 Namespace Attribute Notices: Supported 00:17:36.816 Firmware Activation Notices: Not Supported 00:17:36.816 ANA Change Notices: Not Supported 00:17:36.816 PLE Aggregate Log Change Notices: Not Supported 00:17:36.816 LBA Status Info Alert Notices: Not Supported 00:17:36.816 EGE Aggregate Log Change Notices: Not Supported 00:17:36.816 Normal NVM Subsystem Shutdown event: Not Supported 00:17:36.816 Zone Descriptor Change Notices: Not Supported 00:17:36.816 Discovery Log Change Notices: Not Supported 00:17:36.816 Controller Attributes 00:17:36.816 128-bit Host Identifier: Supported 00:17:36.816 Non-Operational Permissive Mode: Not Supported 00:17:36.816 NVM Sets: Not Supported 00:17:36.816 Read Recovery Levels: Not Supported 00:17:36.816 Endurance Groups: Not Supported 00:17:36.816 Predictable Latency Mode: Not Supported 00:17:36.816 Traffic Based Keep ALive: Not Supported 00:17:36.816 Namespace Granularity: Not Supported 00:17:36.816 SQ Associations: Not Supported 00:17:36.816 UUID List: Not Supported 00:17:36.816 Multi-Domain Subsystem: Not Supported 00:17:36.816 Fixed Capacity Management: Not Supported 00:17:36.816 Variable Capacity Management: Not Supported 00:17:36.816 Delete Endurance Group: Not Supported 00:17:36.816 Delete NVM Set: Not Supported 00:17:36.816 Extended LBA Formats Supported: Not Supported 00:17:36.816 Flexible Data Placement Supported: Not Supported 00:17:36.816 00:17:36.816 Controller Memory Buffer Support 00:17:36.816 ================================ 00:17:36.816 Supported: No 00:17:36.816 00:17:36.816 Persistent Memory Region Support 00:17:36.816 ================================ 00:17:36.816 Supported: No 00:17:36.816 00:17:36.816 Admin Command Set Attributes 00:17:36.816 ============================ 00:17:36.816 Security Send/Receive: Not Supported 00:17:36.816 Format NVM: Not Supported 00:17:36.816 Firmware Activate/Download: Not Supported 00:17:36.816 Namespace Management: Not Supported 00:17:36.816 Device Self-Test: Not Supported 00:17:36.816 Directives: Not Supported 00:17:36.816 NVMe-MI: Not Supported 00:17:36.816 Virtualization Management: Not Supported 00:17:36.816 Doorbell Buffer Config: Not Supported 00:17:36.816 Get LBA Status Capability: Not Supported 00:17:36.816 Command & Feature Lockdown Capability: Not Supported 00:17:36.816 Abort Command Limit: 4 00:17:36.816 Async Event Request Limit: 4 00:17:36.816 Number of Firmware Slots: N/A 00:17:36.816 Firmware Slot 1 Read-Only: N/A 00:17:36.816 Firmware Activation Without Reset: N/A 00:17:36.816 Multiple Update Detection Support: N/A 00:17:36.816 Firmware Update Granularity: No Information Provided 00:17:36.816 Per-Namespace SMART Log: No 00:17:36.816 Asymmetric Namespace Access Log Page: Not Supported 00:17:36.816 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:17:36.816 Command Effects Log Page: Supported 00:17:36.816 Get Log Page Extended 
Data: Supported 00:17:36.816 Telemetry Log Pages: Not Supported 00:17:36.816 Persistent Event Log Pages: Not Supported 00:17:36.816 Supported Log Pages Log Page: May Support 00:17:36.816 Commands Supported & Effects Log Page: Not Supported 00:17:36.816 Feature Identifiers & Effects Log Page:May Support 00:17:36.816 NVMe-MI Commands & Effects Log Page: May Support 00:17:36.816 Data Area 4 for Telemetry Log: Not Supported 00:17:36.816 Error Log Page Entries Supported: 128 00:17:36.816 Keep Alive: Supported 00:17:36.816 Keep Alive Granularity: 10000 ms 00:17:36.816 00:17:36.816 NVM Command Set Attributes 00:17:36.816 ========================== 00:17:36.816 Submission Queue Entry Size 00:17:36.816 Max: 64 00:17:36.816 Min: 64 00:17:36.816 Completion Queue Entry Size 00:17:36.816 Max: 16 00:17:36.816 Min: 16 00:17:36.816 Number of Namespaces: 32 00:17:36.816 Compare Command: Supported 00:17:36.816 Write Uncorrectable Command: Not Supported 00:17:36.816 Dataset Management Command: Supported 00:17:36.816 Write Zeroes Command: Supported 00:17:36.816 Set Features Save Field: Not Supported 00:17:36.816 Reservations: Supported 00:17:36.816 Timestamp: Not Supported 00:17:36.816 Copy: Supported 00:17:36.816 Volatile Write Cache: Present 00:17:36.816 Atomic Write Unit (Normal): 1 00:17:36.817 Atomic Write Unit (PFail): 1 00:17:36.817 Atomic Compare & Write Unit: 1 00:17:36.817 Fused Compare & Write: Supported 00:17:36.817 Scatter-Gather List 00:17:36.817 SGL Command Set: Supported 00:17:36.817 SGL Keyed: Supported 00:17:36.817 SGL Bit Bucket Descriptor: Not Supported 00:17:36.817 SGL Metadata Pointer: Not Supported 00:17:36.817 Oversized SGL: Not Supported 00:17:36.817 SGL Metadata Address: Not Supported 00:17:36.817 SGL Offset: Supported 00:17:36.817 Transport SGL Data Block: Not Supported 00:17:36.817 Replay Protected Memory Block: Not Supported 00:17:36.817 00:17:36.817 Firmware Slot Information 00:17:36.817 ========================= 00:17:36.817 Active slot: 1 00:17:36.817 Slot 1 Firmware Revision: 25.01 00:17:36.817 00:17:36.817 00:17:36.817 Commands Supported and Effects 00:17:36.817 ============================== 00:17:36.817 Admin Commands 00:17:36.817 -------------- 00:17:36.817 Get Log Page (02h): Supported 00:17:36.817 Identify (06h): Supported 00:17:36.817 Abort (08h): Supported 00:17:36.817 Set Features (09h): Supported 00:17:36.817 Get Features (0Ah): Supported 00:17:36.817 Asynchronous Event Request (0Ch): Supported 00:17:36.817 Keep Alive (18h): Supported 00:17:36.817 I/O Commands 00:17:36.817 ------------ 00:17:36.817 Flush (00h): Supported LBA-Change 00:17:36.817 Write (01h): Supported LBA-Change 00:17:36.817 Read (02h): Supported 00:17:36.817 Compare (05h): Supported 00:17:36.817 Write Zeroes (08h): Supported LBA-Change 00:17:36.817 Dataset Management (09h): Supported LBA-Change 00:17:36.817 Copy (19h): Supported LBA-Change 00:17:36.817 00:17:36.817 Error Log 00:17:36.817 ========= 00:17:36.817 00:17:36.817 Arbitration 00:17:36.817 =========== 00:17:36.817 Arbitration Burst: 1 00:17:36.817 00:17:36.817 Power Management 00:17:36.817 ================ 00:17:36.817 Number of Power States: 1 00:17:36.817 Current Power State: Power State #0 00:17:36.817 Power State #0: 00:17:36.817 Max Power: 0.00 W 00:17:36.817 Non-Operational State: Operational 00:17:36.817 Entry Latency: Not Reported 00:17:36.817 Exit Latency: Not Reported 00:17:36.817 Relative Read Throughput: 0 00:17:36.817 Relative Read Latency: 0 00:17:36.817 Relative Write Throughput: 0 00:17:36.817 Relative Write Latency: 0 
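The controller data dumped above is what the SPDK identify example prints for nqn.2016-06.io.spdk:cnode1 once the TCP listener on 10.0.0.3:4420 is up. A minimal sketch of reproducing such a dump by hand follows; the binary path and the subnqn key in the -r transport ID string are assumptions made by analogy with the spdk_nvme_perf invocations traced later in this log, not commands copied from it.

    # Assumed path, mirroring the spdk_nvme_perf binary used elsewhere in this run.
    # The -r string names the NVMe/TCP listener and the subsystem to identify.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'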
00:17:36.817 Idle Power: Not Reported 00:17:36.817 Active Power: Not Reported 00:17:36.817 Non-Operational Permissive Mode: Not Supported 00:17:36.817 00:17:36.817 Health Information 00:17:36.817 ================== 00:17:36.817 Critical Warnings: 00:17:36.817 Available Spare Space: OK 00:17:36.817 Temperature: OK 00:17:36.817 Device Reliability: OK 00:17:36.817 Read Only: No 00:17:36.817 Volatile Memory Backup: OK 00:17:36.817 Current Temperature: 0 Kelvin (-273 Celsius) 00:17:36.817 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:17:36.817 Available Spare: 0% 00:17:36.817 Available Spare Threshold: 0% 00:17:36.817 Life Percentage Used:[2024-11-26 20:45:31.676253] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:36.817 [2024-11-26 20:45:31.676259] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x107b750) 00:17:36.817 [2024-11-26 20:45:31.676265] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.817 [2024-11-26 20:45:31.676281] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10e01c0, cid 7, qid 0 00:17:36.817 [2024-11-26 20:45:31.676320] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:36.817 [2024-11-26 20:45:31.676326] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:36.817 [2024-11-26 20:45:31.676330] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:36.817 [2024-11-26 20:45:31.676334] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10e01c0) on tqpair=0x107b750 00:17:36.817 [2024-11-26 20:45:31.676368] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:17:36.817 [2024-11-26 20:45:31.676377] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10df740) on tqpair=0x107b750 00:17:36.817 [2024-11-26 20:45:31.676383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:36.817 [2024-11-26 20:45:31.676389] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10df8c0) on tqpair=0x107b750 00:17:36.817 [2024-11-26 20:45:31.676394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:36.817 [2024-11-26 20:45:31.676399] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10dfa40) on tqpair=0x107b750 00:17:36.817 [2024-11-26 20:45:31.676403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:36.817 [2024-11-26 20:45:31.676409] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10dfbc0) on tqpair=0x107b750 00:17:36.817 [2024-11-26 20:45:31.676413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:36.817 [2024-11-26 20:45:31.676421] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:36.817 [2024-11-26 20:45:31.676426] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:36.817 [2024-11-26 20:45:31.676429] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x107b750) 00:17:36.817 [2024-11-26 20:45:31.676436] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:17:36.817 [2024-11-26 20:45:31.676451] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10dfbc0, cid 3, qid 0 00:17:36.817 [2024-11-26 20:45:31.676485] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:36.817 [2024-11-26 20:45:31.676491] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:36.817 [2024-11-26 20:45:31.676494] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:36.817 [2024-11-26 20:45:31.676498] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10dfbc0) on tqpair=0x107b750 00:17:36.817 [2024-11-26 20:45:31.676505] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:36.817 [2024-11-26 20:45:31.676509] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:36.817 [2024-11-26 20:45:31.676513] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x107b750) 00:17:36.817 [2024-11-26 20:45:31.676519] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.817 [2024-11-26 20:45:31.676534] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10dfbc0, cid 3, qid 0 00:17:36.817 [2024-11-26 20:45:31.676588] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:36.817 [2024-11-26 20:45:31.676594] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:36.817 [2024-11-26 20:45:31.676598] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:36.817 [2024-11-26 20:45:31.676602] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10dfbc0) on tqpair=0x107b750 00:17:36.817 [2024-11-26 20:45:31.676607] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:17:36.817 [2024-11-26 20:45:31.676612] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:17:36.817 [2024-11-26 20:45:31.676621] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:36.817 [2024-11-26 20:45:31.676625] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:36.817 [2024-11-26 20:45:31.676629] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x107b750) 00:17:36.817 [2024-11-26 20:45:31.676635] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.817 [2024-11-26 20:45:31.676648] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10dfbc0, cid 3, qid 0 00:17:36.817 [2024-11-26 20:45:31.676681] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:36.817 [2024-11-26 20:45:31.676687] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:36.817 [2024-11-26 20:45:31.676690] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:36.817 [2024-11-26 20:45:31.676694] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10dfbc0) on tqpair=0x107b750 00:17:36.817 [2024-11-26 20:45:31.676703] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:36.817 [2024-11-26 20:45:31.676708] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:36.817 [2024-11-26 20:45:31.676712] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x107b750) 00:17:36.817 [2024-11-26 20:45:31.676718] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.817 [2024-11-26 20:45:31.676731] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10dfbc0, cid 3, qid 0 00:17:36.818 [2024-11-26 20:45:31.676769] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:36.818 [2024-11-26 20:45:31.676774] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:36.818 [2024-11-26 20:45:31.676778] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:36.818 [2024-11-26 20:45:31.676782] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10dfbc0) on tqpair=0x107b750 00:17:36.818 [2024-11-26 20:45:31.676791] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:36.818 [2024-11-26 20:45:31.676795] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:36.818 [2024-11-26 20:45:31.676799] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x107b750) 00:17:36.818 [2024-11-26 20:45:31.676805] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.818 [2024-11-26 20:45:31.676818] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10dfbc0, cid 3, qid 0 00:17:36.818 [2024-11-26 20:45:31.676851] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:36.818 [2024-11-26 20:45:31.676857] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:36.818 [2024-11-26 20:45:31.676861] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:36.818 [2024-11-26 20:45:31.676865] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10dfbc0) on tqpair=0x107b750 00:17:36.818 [2024-11-26 20:45:31.676873] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:36.818 [2024-11-26 20:45:31.676877] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:36.818 [2024-11-26 20:45:31.676881] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x107b750) 00:17:36.818 [2024-11-26 20:45:31.676887] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.818 [2024-11-26 20:45:31.676900] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10dfbc0, cid 3, qid 0 00:17:36.818 [2024-11-26 20:45:31.676938] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:36.818 [2024-11-26 20:45:31.676944] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:36.818 [2024-11-26 20:45:31.676948] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:36.818 [2024-11-26 20:45:31.676952] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10dfbc0) on tqpair=0x107b750 00:17:36.818 [2024-11-26 20:45:31.676960] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:36.818 [2024-11-26 20:45:31.676964] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:36.818 [2024-11-26 20:45:31.676968] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x107b750) 00:17:36.818 [2024-11-26 20:45:31.676974] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.818 [2024-11-26 20:45:31.676987] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10dfbc0, cid 3, qid 0 00:17:36.818 [2024-11-26 20:45:31.677025] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:36.818 [2024-11-26 20:45:31.677031] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:36.818 [2024-11-26 20:45:31.677035] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:36.818 [2024-11-26 20:45:31.677039] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10dfbc0) on tqpair=0x107b750 00:17:36.818 [2024-11-26 20:45:31.677048] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:36.818 [2024-11-26 20:45:31.677052] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:36.818 [2024-11-26 20:45:31.677056] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x107b750) 00:17:36.818 [2024-11-26 20:45:31.677062] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.818 [2024-11-26 20:45:31.677075] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10dfbc0, cid 3, qid 0 00:17:36.818 [2024-11-26 20:45:31.677107] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:36.818 [2024-11-26 20:45:31.677113] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:36.818 [2024-11-26 20:45:31.677117] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:36.818 [2024-11-26 20:45:31.677121] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10dfbc0) on tqpair=0x107b750 00:17:36.818 [2024-11-26 20:45:31.677129] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:36.818 [2024-11-26 20:45:31.677133] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:36.818 [2024-11-26 20:45:31.677138] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x107b750) 00:17:36.818 [2024-11-26 20:45:31.677144] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.818 [2024-11-26 20:45:31.681165] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10dfbc0, cid 3, qid 0 00:17:36.818 [2024-11-26 20:45:31.681186] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:36.818 [2024-11-26 20:45:31.681192] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:36.818 [2024-11-26 20:45:31.681196] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:36.818 [2024-11-26 20:45:31.681200] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10dfbc0) on tqpair=0x107b750 00:17:36.818 [2024-11-26 20:45:31.681212] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:36.818 [2024-11-26 20:45:31.681216] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:36.818 [2024-11-26 20:45:31.681220] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x107b750) 00:17:36.818 [2024-11-26 20:45:31.681227] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.818 [2024-11-26 20:45:31.681246] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10dfbc0, cid 3, qid 0 00:17:36.818 [2024-11-26 20:45:31.681280] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:36.818 [2024-11-26 
20:45:31.681286] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:36.818 [2024-11-26 20:45:31.681290] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:36.818 [2024-11-26 20:45:31.681294] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10dfbc0) on tqpair=0x107b750 00:17:36.818 [2024-11-26 20:45:31.681301] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 4 milliseconds 00:17:36.818 0% 00:17:36.818 Data Units Read: 0 00:17:36.818 Data Units Written: 0 00:17:36.818 Host Read Commands: 0 00:17:36.818 Host Write Commands: 0 00:17:36.818 Controller Busy Time: 0 minutes 00:17:36.818 Power Cycles: 0 00:17:36.818 Power On Hours: 0 hours 00:17:36.818 Unsafe Shutdowns: 0 00:17:36.818 Unrecoverable Media Errors: 0 00:17:36.818 Lifetime Error Log Entries: 0 00:17:36.818 Warning Temperature Time: 0 minutes 00:17:36.818 Critical Temperature Time: 0 minutes 00:17:36.818 00:17:36.818 Number of Queues 00:17:36.818 ================ 00:17:36.818 Number of I/O Submission Queues: 127 00:17:36.818 Number of I/O Completion Queues: 127 00:17:36.818 00:17:36.818 Active Namespaces 00:17:36.818 ================= 00:17:36.818 Namespace ID:1 00:17:36.818 Error Recovery Timeout: Unlimited 00:17:36.818 Command Set Identifier: NVM (00h) 00:17:36.818 Deallocate: Supported 00:17:36.818 Deallocated/Unwritten Error: Not Supported 00:17:36.818 Deallocated Read Value: Unknown 00:17:36.818 Deallocate in Write Zeroes: Not Supported 00:17:36.818 Deallocated Guard Field: 0xFFFF 00:17:36.818 Flush: Supported 00:17:36.818 Reservation: Supported 00:17:36.818 Namespace Sharing Capabilities: Multiple Controllers 00:17:36.818 Size (in LBAs): 131072 (0GiB) 00:17:36.818 Capacity (in LBAs): 131072 (0GiB) 00:17:36.818 Utilization (in LBAs): 131072 (0GiB) 00:17:36.818 NGUID: ABCDEF0123456789ABCDEF0123456789 00:17:36.818 EUI64: ABCDEF0123456789 00:17:36.818 UUID: 7b9168d4-357b-48fe-8173-db87e3420a38 00:17:36.818 Thin Provisioning: Not Supported 00:17:36.818 Per-NS Atomic Units: Yes 00:17:36.818 Atomic Boundary Size (Normal): 0 00:17:36.818 Atomic Boundary Size (PFail): 0 00:17:36.818 Atomic Boundary Offset: 0 00:17:36.818 Maximum Single Source Range Length: 65535 00:17:36.818 Maximum Copy Length: 65535 00:17:36.818 Maximum Source Range Count: 1 00:17:36.818 NGUID/EUI64 Never Reused: No 00:17:36.818 Namespace Write Protected: No 00:17:36.818 Number of LBA Formats: 1 00:17:36.818 Current LBA Format: LBA Format #00 00:17:36.818 LBA Format #00: Data Size: 512 Metadata Size: 0 00:17:36.818 00:17:36.818 20:45:31 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:17:36.818 20:45:31 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:36.818 20:45:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.818 20:45:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:36.818 20:45:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.818 20:45:31 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:17:36.818 20:45:31 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:17:36.818 20:45:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:36.818 20:45:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:17:36.818 20:45:31 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:36.818 20:45:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:17:36.818 20:45:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:36.818 20:45:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:36.818 rmmod nvme_tcp 00:17:36.818 rmmod nvme_fabrics 00:17:37.078 rmmod nvme_keyring 00:17:37.078 20:45:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:37.078 20:45:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:17:37.078 20:45:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:17:37.078 20:45:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 74647 ']' 00:17:37.078 20:45:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 74647 00:17:37.078 20:45:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 74647 ']' 00:17:37.078 20:45:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 74647 00:17:37.078 20:45:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:17:37.078 20:45:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:37.078 20:45:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74647 00:17:37.078 20:45:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:37.078 20:45:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:37.078 killing process with pid 74647 00:17:37.078 20:45:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74647' 00:17:37.078 20:45:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 74647 00:17:37.078 20:45:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 74647 00:17:37.337 20:45:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:37.337 20:45:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:37.337 20:45:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:37.337 20:45:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:17:37.337 20:45:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:17:37.337 20:45:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:37.337 20:45:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:17:37.337 20:45:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:37.337 20:45:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:37.337 20:45:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:37.337 20:45:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:37.337 20:45:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:37.337 20:45:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:37.337 20:45:32 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:37.337 20:45:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:37.337 20:45:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:37.337 20:45:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:37.337 20:45:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:37.337 20:45:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:37.595 20:45:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:37.595 20:45:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:37.595 20:45:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:37.595 20:45:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:37.595 20:45:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:37.595 20:45:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:37.595 20:45:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:37.595 20:45:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@300 -- # return 0 00:17:37.595 00:17:37.595 real 0m3.106s 00:17:37.595 user 0m7.333s 00:17:37.595 sys 0m0.958s 00:17:37.595 20:45:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:37.595 20:45:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:37.595 ************************************ 00:17:37.595 END TEST nvmf_identify 00:17:37.595 ************************************ 00:17:37.595 20:45:32 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:17:37.595 20:45:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:37.595 20:45:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:37.595 20:45:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:17:37.595 ************************************ 00:17:37.595 START TEST nvmf_perf 00:17:37.595 ************************************ 00:17:37.595 20:45:32 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:17:37.854 * Looking for test storage... 
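perf.sh drives the same target with spdk_nvme_perf; the queue depth, I/O size, and transport ID flags it uses appear verbatim in the runs traced further down. A rough standalone equivalent is sketched here (local PCIe controller first, then the exported TCP subsystem); it assumes the target and its 10.0.0.3:4420 listener are already up, and drops the -i/-HI options the harness adds.

    # 50/50 random read/write, 4 KiB I/O, queue depth 32, against the local NVMe device
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 \
        -r 'trtype:PCIe traddr:0000:00:10.0'
    # Same workload over NVMe/TCP to the subsystem exported by nvmf_tgt
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420'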
00:17:37.854 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:37.854 20:45:32 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:37.854 20:45:32 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lcov --version 00:17:37.854 20:45:32 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:37.854 20:45:32 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:37.854 20:45:32 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:37.854 20:45:32 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:37.854 20:45:32 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:37.854 20:45:32 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:17:37.854 20:45:32 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:17:37.854 20:45:32 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:17:37.855 20:45:32 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:17:37.855 20:45:32 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:17:37.855 20:45:32 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:17:37.855 20:45:32 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:17:37.855 20:45:32 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:37.855 20:45:32 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:17:37.855 20:45:32 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:17:37.855 20:45:32 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:37.855 20:45:32 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:37.855 20:45:32 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:17:37.855 20:45:32 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:17:37.855 20:45:32 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:37.855 20:45:32 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:17:37.855 20:45:32 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:17:37.855 20:45:32 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:17:37.855 20:45:32 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:17:37.855 20:45:32 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:37.855 20:45:32 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:17:37.855 20:45:32 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:17:37.855 20:45:32 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:37.855 20:45:32 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:37.855 20:45:32 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:17:37.855 20:45:32 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:37.855 20:45:32 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:37.855 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:37.855 --rc genhtml_branch_coverage=1 00:17:37.855 --rc genhtml_function_coverage=1 00:17:37.855 --rc genhtml_legend=1 00:17:37.855 --rc geninfo_all_blocks=1 00:17:37.855 --rc geninfo_unexecuted_blocks=1 00:17:37.855 00:17:37.855 ' 00:17:37.855 20:45:32 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:37.855 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:37.855 --rc genhtml_branch_coverage=1 00:17:37.855 --rc genhtml_function_coverage=1 00:17:37.855 --rc genhtml_legend=1 00:17:37.855 --rc geninfo_all_blocks=1 00:17:37.855 --rc geninfo_unexecuted_blocks=1 00:17:37.855 00:17:37.855 ' 00:17:37.855 20:45:32 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:37.855 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:37.855 --rc genhtml_branch_coverage=1 00:17:37.855 --rc genhtml_function_coverage=1 00:17:37.855 --rc genhtml_legend=1 00:17:37.855 --rc geninfo_all_blocks=1 00:17:37.855 --rc geninfo_unexecuted_blocks=1 00:17:37.855 00:17:37.855 ' 00:17:37.855 20:45:32 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:37.855 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:37.855 --rc genhtml_branch_coverage=1 00:17:37.855 --rc genhtml_function_coverage=1 00:17:37.855 --rc genhtml_legend=1 00:17:37.855 --rc geninfo_all_blocks=1 00:17:37.855 --rc geninfo_unexecuted_blocks=1 00:17:37.855 00:17:37.855 ' 00:17:37.855 20:45:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:37.855 20:45:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:17:37.855 20:45:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:37.855 20:45:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:37.855 20:45:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:17:37.855 20:45:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:37.855 20:45:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:37.855 20:45:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:37.855 20:45:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:37.855 20:45:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:37.855 20:45:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:37.855 20:45:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:37.855 20:45:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b 00:17:37.855 20:45:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b7a0101-ee75-44bd-b64f-b6a56d193f2b 00:17:37.855 20:45:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:37.855 20:45:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:37.855 20:45:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:37.855 20:45:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:37.855 20:45:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:37.855 20:45:32 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:17:37.855 20:45:32 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:37.855 20:45:32 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:37.855 20:45:32 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:37.855 20:45:32 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:37.855 20:45:32 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:37.855 20:45:32 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:37.855 20:45:32 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:17:37.855 20:45:32 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:37.855 20:45:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:17:37.855 20:45:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:37.855 20:45:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:37.855 20:45:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:37.855 20:45:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:37.855 20:45:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:37.855 20:45:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:37.855 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:37.855 20:45:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:37.855 20:45:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:37.856 20:45:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:37.856 20:45:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:17:37.856 20:45:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:17:37.856 20:45:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:37.856 20:45:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:17:37.856 20:45:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:37.856 20:45:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:37.856 20:45:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:37.856 20:45:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:37.856 20:45:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:37.856 20:45:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:37.856 20:45:32 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- 
# eval '_remove_spdk_ns 15> /dev/null' 00:17:37.856 20:45:32 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:37.856 20:45:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:17:37.856 20:45:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:17:37.856 20:45:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:17:37.856 20:45:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:17:37.856 20:45:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:17:37.856 20:45:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@460 -- # nvmf_veth_init 00:17:37.856 20:45:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:37.856 20:45:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:37.856 20:45:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:37.856 20:45:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:37.856 20:45:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:37.856 20:45:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:37.856 20:45:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:37.856 20:45:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:37.856 20:45:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:37.856 20:45:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:37.856 20:45:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:37.856 20:45:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:37.856 20:45:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:37.856 20:45:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:37.856 20:45:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:37.856 20:45:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:37.856 20:45:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:37.856 Cannot find device "nvmf_init_br" 00:17:37.856 20:45:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # true 00:17:37.856 20:45:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:37.856 Cannot find device "nvmf_init_br2" 00:17:37.856 20:45:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # true 00:17:37.856 20:45:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:37.856 Cannot find device "nvmf_tgt_br" 00:17:37.856 20:45:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@164 -- # true 00:17:37.856 20:45:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:17:37.856 Cannot find device "nvmf_tgt_br2" 00:17:37.856 20:45:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@165 -- # true 00:17:37.856 20:45:32 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:37.856 Cannot find device "nvmf_init_br" 00:17:38.114 20:45:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@166 -- # true 00:17:38.114 20:45:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:38.114 Cannot find device "nvmf_init_br2" 00:17:38.114 20:45:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@167 -- # true 00:17:38.114 20:45:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:38.114 Cannot find device "nvmf_tgt_br" 00:17:38.114 20:45:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@168 -- # true 00:17:38.114 20:45:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:38.114 Cannot find device "nvmf_tgt_br2" 00:17:38.114 20:45:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # true 00:17:38.114 20:45:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:38.114 Cannot find device "nvmf_br" 00:17:38.114 20:45:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # true 00:17:38.114 20:45:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:38.114 Cannot find device "nvmf_init_if" 00:17:38.114 20:45:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # true 00:17:38.114 20:45:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:38.114 Cannot find device "nvmf_init_if2" 00:17:38.114 20:45:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # true 00:17:38.114 20:45:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:38.114 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:38.114 20:45:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@173 -- # true 00:17:38.114 20:45:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:38.114 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:38.114 20:45:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # true 00:17:38.114 20:45:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:38.114 20:45:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:38.114 20:45:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:38.114 20:45:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:38.114 20:45:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:38.114 20:45:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:38.114 20:45:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:38.114 20:45:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:38.115 20:45:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:38.115 20:45:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:38.115 20:45:33 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:38.115 20:45:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:38.115 20:45:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:38.115 20:45:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:38.115 20:45:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:38.115 20:45:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:38.115 20:45:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:38.115 20:45:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:38.115 20:45:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:38.115 20:45:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:38.373 20:45:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:38.373 20:45:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:38.373 20:45:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:38.373 20:45:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:38.373 20:45:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:38.373 20:45:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:38.373 20:45:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:38.373 20:45:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:38.373 20:45:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:38.373 20:45:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:38.373 20:45:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:38.373 20:45:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:38.373 20:45:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:38.373 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:38.373 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.087 ms 00:17:38.373 00:17:38.373 --- 10.0.0.3 ping statistics --- 00:17:38.373 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:38.373 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:17:38.373 20:45:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:38.373 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:17:38.373 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.054 ms 00:17:38.373 00:17:38.373 --- 10.0.0.4 ping statistics --- 00:17:38.373 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:38.373 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:17:38.373 20:45:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:38.373 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:38.373 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:17:38.373 00:17:38.373 --- 10.0.0.1 ping statistics --- 00:17:38.373 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:38.373 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:17:38.373 20:45:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:38.373 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:38.373 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.054 ms 00:17:38.373 00:17:38.373 --- 10.0.0.2 ping statistics --- 00:17:38.373 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:38.373 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:17:38.373 20:45:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:38.373 20:45:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@461 -- # return 0 00:17:38.373 20:45:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:38.373 20:45:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:38.373 20:45:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:38.373 20:45:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:38.373 20:45:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:38.373 20:45:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:38.373 20:45:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:38.373 20:45:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:17:38.373 20:45:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:38.373 20:45:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:38.373 20:45:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:17:38.373 20:45:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=74916 00:17:38.373 20:45:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 74916 00:17:38.373 20:45:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:38.373 20:45:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 74916 ']' 00:17:38.373 20:45:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:38.373 20:45:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:38.373 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:38.373 20:45:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
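From here the trace is the target bring-up itself: nvmf_tgt is started inside the nvmf_tgt_ns_spdk namespace that the pings above just verified, then configured from the host side over rpc.py. Collapsed out of the xtrace that follows, the sequence is roughly the sketch below; the default /var/tmp/spdk.sock RPC socket and the manual backgrounding of nvmf_tgt are assumptions standing in for what nvmfappstart/waitforlisten actually do.

    # Start the target in the test namespace (same flags nvmfappstart passes here)
    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    # Create the TCP transport and a 64 MB malloc bdev (512-byte blocks, named Malloc0)
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512
    # Export it through a new subsystem listening on the namespace-side address
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420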
00:17:38.373 20:45:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:38.373 20:45:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:17:38.373 [2024-11-26 20:45:33.311256] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:17:38.373 [2024-11-26 20:45:33.311360] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:38.631 [2024-11-26 20:45:33.473776] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:38.631 [2024-11-26 20:45:33.552383] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:38.631 [2024-11-26 20:45:33.552437] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:38.631 [2024-11-26 20:45:33.552452] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:38.631 [2024-11-26 20:45:33.552466] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:38.631 [2024-11-26 20:45:33.552477] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:38.631 [2024-11-26 20:45:33.553987] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:38.631 [2024-11-26 20:45:33.554034] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:38.631 [2024-11-26 20:45:33.554132] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:38.631 [2024-11-26 20:45:33.554133] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:38.890 [2024-11-26 20:45:33.639051] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:38.890 20:45:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:38.890 20:45:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:17:38.890 20:45:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:38.890 20:45:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:38.890 20:45:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:17:38.890 20:45:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:38.890 20:45:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:17:38.890 20:45:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:17:39.456 20:45:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:17:39.456 20:45:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:17:39.714 20:45:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:00:10.0 00:17:39.714 20:45:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:39.973 20:45:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:17:39.973 20:45:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- 
# '[' -n 0000:00:10.0 ']' 00:17:39.973 20:45:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:17:39.973 20:45:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:17:39.973 20:45:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:40.231 [2024-11-26 20:45:35.034442] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:40.231 20:45:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:40.488 20:45:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:17:40.488 20:45:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:40.747 20:45:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:17:40.747 20:45:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:17:41.005 20:45:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:17:41.264 [2024-11-26 20:45:36.217211] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:41.264 20:45:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:17:41.523 20:45:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:17:41.523 20:45:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:17:41.523 20:45:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:17:41.523 20:45:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:17:42.899 Initializing NVMe Controllers 00:17:42.899 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:17:42.899 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:17:42.899 Initialization complete. Launching workers. 00:17:42.899 ======================================================== 00:17:42.899 Latency(us) 00:17:42.899 Device Information : IOPS MiB/s Average min max 00:17:42.899 PCIE (0000:00:10.0) NSID 1 from core 0: 23712.00 92.62 1349.24 361.21 8166.90 00:17:42.899 ======================================================== 00:17:42.899 Total : 23712.00 92.62 1349.24 361.21 8166.90 00:17:42.899 00:17:42.900 20:45:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:17:44.276 Initializing NVMe Controllers 00:17:44.276 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:17:44.276 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:44.276 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:17:44.276 Initialization complete. Launching workers. 
00:17:44.276 ======================================================== 00:17:44.276 Latency(us) 00:17:44.277 Device Information : IOPS MiB/s Average min max 00:17:44.277 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 4470.04 17.46 223.44 82.91 4208.94 00:17:44.277 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 123.75 0.48 8144.46 5018.32 12028.69 00:17:44.277 ======================================================== 00:17:44.277 Total : 4593.79 17.94 436.82 82.91 12028.69 00:17:44.277 00:17:44.277 20:45:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:17:45.650 Initializing NVMe Controllers 00:17:45.650 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:17:45.650 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:45.650 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:17:45.650 Initialization complete. Launching workers. 00:17:45.650 ======================================================== 00:17:45.651 Latency(us) 00:17:45.651 Device Information : IOPS MiB/s Average min max 00:17:45.651 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9093.95 35.52 3535.40 494.13 8247.62 00:17:45.651 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3965.98 15.49 8106.74 5490.64 16256.06 00:17:45.651 ======================================================== 00:17:45.651 Total : 13059.92 51.02 4923.60 494.13 16256.06 00:17:45.651 00:17:45.651 20:45:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:17:45.651 20:45:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:17:48.183 Initializing NVMe Controllers 00:17:48.183 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:17:48.183 Controller IO queue size 128, less than required. 00:17:48.183 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:48.183 Controller IO queue size 128, less than required. 00:17:48.183 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:48.183 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:48.183 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:17:48.183 Initialization complete. Launching workers. 
00:17:48.183 ======================================================== 00:17:48.183 Latency(us) 00:17:48.183 Device Information : IOPS MiB/s Average min max 00:17:48.183 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1751.23 437.81 74477.49 34992.57 114563.74 00:17:48.183 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 667.90 166.97 201467.35 70881.64 312833.05 00:17:48.183 ======================================================== 00:17:48.183 Total : 2419.13 604.78 109538.14 34992.57 312833.05 00:17:48.183 00:17:48.183 20:45:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -c 0xf -P 4 00:17:48.441 Initializing NVMe Controllers 00:17:48.441 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:17:48.441 Controller IO queue size 128, less than required. 00:17:48.441 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:48.441 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:17:48.441 Controller IO queue size 128, less than required. 00:17:48.441 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:48.441 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. Removing this ns from test 00:17:48.441 WARNING: Some requested NVMe devices were skipped 00:17:48.441 No valid NVMe controllers or AIO or URING devices found 00:17:48.441 20:45:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' --transport-stat 00:17:50.974 Initializing NVMe Controllers 00:17:50.974 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:17:50.974 Controller IO queue size 128, less than required. 00:17:50.974 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:50.974 Controller IO queue size 128, less than required. 00:17:50.974 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:50.974 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:50.974 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:17:50.974 Initialization complete. Launching workers. 
00:17:50.974 00:17:50.974 ==================== 00:17:50.974 lcore 0, ns TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:17:50.974 TCP transport: 00:17:50.974 polls: 10301 00:17:50.974 idle_polls: 6409 00:17:50.974 sock_completions: 3892 00:17:50.974 nvme_completions: 6879 00:17:50.974 submitted_requests: 10304 00:17:50.974 queued_requests: 1 00:17:50.974 00:17:50.974 ==================== 00:17:50.974 lcore 0, ns TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:17:50.974 TCP transport: 00:17:50.974 polls: 10382 00:17:50.974 idle_polls: 5247 00:17:50.974 sock_completions: 5135 00:17:50.974 nvme_completions: 7117 00:17:50.974 submitted_requests: 10662 00:17:50.974 queued_requests: 1 00:17:50.974 ======================================================== 00:17:50.974 Latency(us) 00:17:50.974 Device Information : IOPS MiB/s Average min max 00:17:50.974 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1716.32 429.08 76195.34 29828.57 161224.12 00:17:50.974 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1775.71 443.93 72379.95 23636.63 137533.50 00:17:50.974 ======================================================== 00:17:50.974 Total : 3492.03 873.01 74255.20 23636.63 161224.12 00:17:50.974 00:17:50.974 20:45:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:17:50.975 20:45:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:51.233 20:45:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:17:51.233 20:45:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:17:51.233 20:45:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:17:51.233 20:45:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:51.233 20:45:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:17:51.234 20:45:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:51.234 20:45:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:17:51.234 20:45:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:51.234 20:45:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:51.234 rmmod nvme_tcp 00:17:51.492 rmmod nvme_fabrics 00:17:51.492 rmmod nvme_keyring 00:17:51.492 20:45:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:51.492 20:45:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:17:51.492 20:45:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:17:51.492 20:45:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 74916 ']' 00:17:51.492 20:45:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 74916 00:17:51.492 20:45:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 74916 ']' 00:17:51.492 20:45:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 74916 00:17:51.492 20:45:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:17:51.492 20:45:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:51.492 20:45:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74916 00:17:51.492 killing process with pid 74916 00:17:51.492 20:45:46 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:51.492 20:45:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:51.492 20:45:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74916' 00:17:51.492 20:45:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@973 -- # kill 74916 00:17:51.492 20:45:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 74916 00:17:52.425 20:45:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:52.425 20:45:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:52.425 20:45:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:52.425 20:45:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:17:52.425 20:45:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:17:52.425 20:45:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:52.425 20:45:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:17:52.425 20:45:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:52.425 20:45:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:52.425 20:45:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:52.425 20:45:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:52.425 20:45:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:52.425 20:45:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:52.425 20:45:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:52.425 20:45:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:52.425 20:45:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:52.425 20:45:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:52.425 20:45:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:52.425 20:45:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:52.425 20:45:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:52.683 20:45:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:52.683 20:45:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:52.683 20:45:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:52.683 20:45:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:52.683 20:45:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:52.683 20:45:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:52.683 20:45:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@300 -- # return 0 00:17:52.683 ************************************ 00:17:52.683 END TEST nvmf_perf 00:17:52.683 ************************************ 
00:17:52.683 00:17:52.683 real 0m15.021s 00:17:52.683 user 0m53.347s 00:17:52.683 sys 0m4.848s 00:17:52.683 20:45:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:52.683 20:45:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:17:52.683 20:45:47 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:17:52.683 20:45:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:52.683 20:45:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:52.683 20:45:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:17:52.683 ************************************ 00:17:52.683 START TEST nvmf_fio_host 00:17:52.683 ************************************ 00:17:52.683 20:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:17:52.942 * Looking for test storage... 00:17:52.942 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:52.942 20:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:52.942 20:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lcov --version 00:17:52.942 20:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:52.942 20:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:52.942 20:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:52.942 20:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:52.942 20:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:52.942 20:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:17:52.943 20:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:17:52.943 20:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:17:52.943 20:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:17:52.943 20:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:17:52.943 20:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:17:52.943 20:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:17:52.943 20:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:52.943 20:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:17:52.943 20:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:17:52.943 20:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:52.943 20:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:52.943 20:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:17:52.943 20:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:17:52.943 20:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:52.943 20:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:17:52.943 20:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:17:52.943 20:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:17:52.943 20:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:17:52.943 20:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:52.943 20:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:17:52.943 20:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:17:52.943 20:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:52.943 20:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:52.943 20:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:17:52.943 20:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:52.943 20:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:52.943 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:52.943 --rc genhtml_branch_coverage=1 00:17:52.943 --rc genhtml_function_coverage=1 00:17:52.943 --rc genhtml_legend=1 00:17:52.943 --rc geninfo_all_blocks=1 00:17:52.943 --rc geninfo_unexecuted_blocks=1 00:17:52.943 00:17:52.943 ' 00:17:52.943 20:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:52.943 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:52.943 --rc genhtml_branch_coverage=1 00:17:52.943 --rc genhtml_function_coverage=1 00:17:52.943 --rc genhtml_legend=1 00:17:52.943 --rc geninfo_all_blocks=1 00:17:52.943 --rc geninfo_unexecuted_blocks=1 00:17:52.943 00:17:52.943 ' 00:17:52.943 20:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:52.943 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:52.943 --rc genhtml_branch_coverage=1 00:17:52.943 --rc genhtml_function_coverage=1 00:17:52.943 --rc genhtml_legend=1 00:17:52.943 --rc geninfo_all_blocks=1 00:17:52.943 --rc geninfo_unexecuted_blocks=1 00:17:52.943 00:17:52.943 ' 00:17:52.943 20:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:52.943 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:52.943 --rc genhtml_branch_coverage=1 00:17:52.943 --rc genhtml_function_coverage=1 00:17:52.943 --rc genhtml_legend=1 00:17:52.943 --rc geninfo_all_blocks=1 00:17:52.943 --rc geninfo_unexecuted_blocks=1 00:17:52.943 00:17:52.943 ' 00:17:52.943 20:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:52.943 20:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:17:52.943 20:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:52.943 20:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:52.943 20:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:52.943 20:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:52.943 20:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:52.943 20:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:52.943 20:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:17:52.943 20:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:52.943 20:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:52.943 20:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:17:52.943 20:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:52.943 20:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:52.943 20:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:52.943 20:45:47 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:52.943 20:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:52.943 20:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:52.943 20:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:52.943 20:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:52.943 20:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:52.943 20:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:52.943 20:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b 00:17:52.943 20:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b7a0101-ee75-44bd-b64f-b6a56d193f2b 00:17:52.943 20:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:52.943 20:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:52.943 20:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:52.943 20:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:52.943 20:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:52.943 20:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:17:52.943 20:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:52.943 20:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:52.943 20:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:52.943 20:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:52.943 20:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:52.943 20:45:47 
nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:52.943 20:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:17:52.944 20:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:52.944 20:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:17:52.944 20:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:52.944 20:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:52.944 20:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:52.944 20:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:52.944 20:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:52.944 20:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:52.944 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:52.944 20:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:52.944 20:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:52.944 20:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:52.944 20:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:52.944 20:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:17:52.944 20:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:52.944 20:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:52.944 20:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:52.944 20:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:52.944 20:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:52.944 20:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 
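The nvmftestinit step traced below rebuilds the loopback test network that the pings further down verify. Condensed from the ip and iptables commands printed in this log, the topology is roughly as follows (a sketch only; interface names and addresses are taken from the xtrace, ancillary "link up"/cleanup steps are omitted):

    # condensed sketch of the veth topology nvmf_veth_init builds (names/addresses from this log)
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if  type veth peer name nvmf_init_br    # initiator side, 10.0.0.1/24
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2   # second initiator, 10.0.0.2/24
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br     # target side, 10.0.0.3/24
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2    # second target, 10.0.0.4/24
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk               # target ends live inside the namespace
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip link add nvmf_br type bridge                               # all *_br peers are enslaved to nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic to the listener port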
00:17:52.944 20:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:52.944 20:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:52.944 20:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:17:52.944 20:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:17:52.944 20:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:17:52.944 20:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:17:52.944 20:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:17:52.944 20:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@460 -- # nvmf_veth_init 00:17:52.944 20:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:52.944 20:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:52.944 20:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:52.944 20:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:52.944 20:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:52.944 20:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:52.944 20:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:52.944 20:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:52.944 20:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:52.944 20:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:52.944 20:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:52.944 20:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:52.944 20:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:52.944 20:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:52.944 20:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:52.944 20:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:52.944 20:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:52.944 Cannot find device "nvmf_init_br" 00:17:52.944 20:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # true 00:17:52.944 20:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:52.944 Cannot find device "nvmf_init_br2" 00:17:52.944 20:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # true 00:17:52.944 20:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:52.944 Cannot find device "nvmf_tgt_br" 00:17:52.944 20:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@164 -- # true 00:17:52.944 20:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@165 -- # ip link set 
nvmf_tgt_br2 nomaster 00:17:52.944 Cannot find device "nvmf_tgt_br2" 00:17:52.944 20:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@165 -- # true 00:17:52.944 20:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:52.944 Cannot find device "nvmf_init_br" 00:17:52.944 20:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # true 00:17:52.944 20:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:53.207 Cannot find device "nvmf_init_br2" 00:17:53.207 20:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@167 -- # true 00:17:53.208 20:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:53.208 Cannot find device "nvmf_tgt_br" 00:17:53.208 20:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@168 -- # true 00:17:53.208 20:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:53.208 Cannot find device "nvmf_tgt_br2" 00:17:53.208 20:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # true 00:17:53.208 20:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:53.208 Cannot find device "nvmf_br" 00:17:53.208 20:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # true 00:17:53.208 20:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:53.208 Cannot find device "nvmf_init_if" 00:17:53.208 20:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # true 00:17:53.208 20:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:53.208 Cannot find device "nvmf_init_if2" 00:17:53.208 20:45:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # true 00:17:53.208 20:45:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:53.208 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:53.208 20:45:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@173 -- # true 00:17:53.208 20:45:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:53.208 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:53.208 20:45:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # true 00:17:53.208 20:45:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:53.208 20:45:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:53.208 20:45:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:53.208 20:45:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:53.208 20:45:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:53.208 20:45:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:53.208 20:45:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:53.208 20:45:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev 
nvmf_init_if 00:17:53.208 20:45:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:53.208 20:45:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:53.208 20:45:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:53.208 20:45:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:53.208 20:45:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:53.208 20:45:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:53.208 20:45:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:53.208 20:45:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:53.208 20:45:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:53.208 20:45:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:53.208 20:45:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:53.481 20:45:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:53.481 20:45:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:53.481 20:45:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:53.481 20:45:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:53.481 20:45:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:53.481 20:45:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:53.481 20:45:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:53.481 20:45:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:53.481 20:45:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:53.481 20:45:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:53.481 20:45:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:53.481 20:45:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:53.481 20:45:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:53.481 20:45:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:53.481 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:17:53.481 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.075 ms 00:17:53.481 00:17:53.481 --- 10.0.0.3 ping statistics --- 00:17:53.481 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:53.481 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:17:53.481 20:45:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:53.481 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:17:53.481 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.078 ms 00:17:53.481 00:17:53.481 --- 10.0.0.4 ping statistics --- 00:17:53.481 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:53.481 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:17:53.481 20:45:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:53.481 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:53.481 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:17:53.481 00:17:53.481 --- 10.0.0.1 ping statistics --- 00:17:53.481 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:53.481 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:17:53.481 20:45:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:53.481 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:53.481 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.081 ms 00:17:53.481 00:17:53.481 --- 10.0.0.2 ping statistics --- 00:17:53.481 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:53.481 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:17:53.481 20:45:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:53.481 20:45:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@461 -- # return 0 00:17:53.481 20:45:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:53.481 20:45:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:53.481 20:45:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:53.481 20:45:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:53.481 20:45:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:53.481 20:45:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:53.481 20:45:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:53.481 20:45:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:17:53.481 20:45:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:17:53.481 20:45:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:53.481 20:45:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:17:53.481 20:45:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=75383 00:17:53.481 20:45:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:53.481 20:45:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:53.481 20:45:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 75383 00:17:53.481 20:45:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@835 -- # '[' -z 75383 ']' 00:17:53.482 20:45:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:53.482 20:45:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:53.482 20:45:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:53.482 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:53.482 20:45:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:53.482 20:45:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:17:53.482 [2024-11-26 20:45:48.408627] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:17:53.482 [2024-11-26 20:45:48.408933] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:53.740 [2024-11-26 20:45:48.569957] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:53.740 [2024-11-26 20:45:48.648372] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:53.740 [2024-11-26 20:45:48.648442] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:53.740 [2024-11-26 20:45:48.648458] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:53.740 [2024-11-26 20:45:48.648471] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:53.740 [2024-11-26 20:45:48.648483] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
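The fio stages further down do not go through the kernel NVMe/TCP initiator; fio_nvme preloads the SPDK NVMe fio plugin and passes the TCP connection parameters through --filename. Condensed from the xtrace later in this log (a sketch, assuming fio is installed under /usr/src/fio as in this run; the job file contents are not reproduced here):

    # how the fio stage attaches to the 10.0.0.3:4420 listener via the SPDK plugin
    LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme \
        /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio \
        '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096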
00:17:53.740 [2024-11-26 20:45:48.650216] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:53.740 [2024-11-26 20:45:48.650040] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:53.740 [2024-11-26 20:45:48.650210] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:53.740 [2024-11-26 20:45:48.650133] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:53.997 [2024-11-26 20:45:48.738300] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:53.997 20:45:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:53.997 20:45:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:17:53.997 20:45:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:54.255 [2024-11-26 20:45:49.149832] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:54.255 20:45:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:17:54.255 20:45:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:54.255 20:45:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:17:54.255 20:45:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:17:54.820 Malloc1 00:17:54.820 20:45:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:55.078 20:45:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:55.336 20:45:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:17:55.594 [2024-11-26 20:45:50.448393] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:55.594 20:45:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:17:55.851 20:45:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:17:55.852 20:45:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:17:55.852 20:45:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:17:55.852 20:45:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:17:55.852 20:45:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:55.852 20:45:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:17:55.852 20:45:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local 
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:17:55.852 20:45:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:17:55.852 20:45:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:17:55.852 20:45:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:17:55.852 20:45:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:17:55.852 20:45:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:17:55.852 20:45:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:17:55.852 20:45:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:17:55.852 20:45:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:17:55.852 20:45:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:17:55.852 20:45:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:17:55.852 20:45:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:17:55.852 20:45:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:17:55.852 20:45:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:17:55.852 20:45:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:17:55.852 20:45:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:17:55.852 20:45:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:17:56.110 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:17:56.110 fio-3.35 00:17:56.110 Starting 1 thread 00:17:58.640 00:17:58.640 test: (groupid=0, jobs=1): err= 0: pid=75460: Tue Nov 26 20:45:53 2024 00:17:58.640 read: IOPS=10.5k, BW=41.1MiB/s (43.1MB/s)(82.4MiB/2005msec) 00:17:58.640 slat (nsec): min=1601, max=331564, avg=1982.70, stdev=2963.20 00:17:58.640 clat (usec): min=2527, max=11189, avg=6345.95, stdev=684.82 00:17:58.640 lat (usec): min=2566, max=11191, avg=6347.94, stdev=684.91 00:17:58.640 clat percentiles (usec): 00:17:58.640 | 1.00th=[ 5145], 5.00th=[ 5407], 10.00th=[ 5604], 20.00th=[ 5800], 00:17:58.640 | 30.00th=[ 5932], 40.00th=[ 6063], 50.00th=[ 6194], 60.00th=[ 6456], 00:17:58.640 | 70.00th=[ 6652], 80.00th=[ 6980], 90.00th=[ 7308], 95.00th=[ 7504], 00:17:58.640 | 99.00th=[ 7963], 99.50th=[ 8356], 99.90th=[10028], 99.95th=[10159], 00:17:58.640 | 99.99th=[11076] 00:17:58.640 bw ( KiB/s): min=37072, max=45640, per=99.93%, avg=42036.00, stdev=3651.79, samples=4 00:17:58.640 iops : min= 9268, max=11410, avg=10509.00, stdev=912.95, samples=4 00:17:58.640 write: IOPS=10.5k, BW=41.1MiB/s (43.1MB/s)(82.3MiB/2005msec); 0 zone resets 00:17:58.640 slat (nsec): min=1644, max=263679, avg=2020.99, stdev=2012.48 00:17:58.640 clat (usec): min=2398, max=10645, avg=5782.26, stdev=639.95 00:17:58.640 lat (usec): min=2411, max=10647, avg=5784.28, stdev=640.13 00:17:58.640 
clat percentiles (usec): 00:17:58.640 | 1.00th=[ 4686], 5.00th=[ 4948], 10.00th=[ 5080], 20.00th=[ 5276], 00:17:58.640 | 30.00th=[ 5407], 40.00th=[ 5538], 50.00th=[ 5669], 60.00th=[ 5866], 00:17:58.640 | 70.00th=[ 6128], 80.00th=[ 6325], 90.00th=[ 6587], 95.00th=[ 6783], 00:17:58.640 | 99.00th=[ 7242], 99.50th=[ 7767], 99.90th=[ 9765], 99.95th=[10421], 00:17:58.640 | 99.99th=[10552] 00:17:58.640 bw ( KiB/s): min=37856, max=45584, per=99.96%, avg=42034.00, stdev=3365.33, samples=4 00:17:58.640 iops : min= 9464, max=11396, avg=10508.50, stdev=841.33, samples=4 00:17:58.640 lat (msec) : 4=0.08%, 10=99.83%, 20=0.09% 00:17:58.640 cpu : usr=68.56%, sys=24.80%, ctx=15, majf=0, minf=7 00:17:58.640 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:17:58.640 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:58.640 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:58.640 issued rwts: total=21085,21077,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:58.640 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:58.640 00:17:58.640 Run status group 0 (all jobs): 00:17:58.640 READ: bw=41.1MiB/s (43.1MB/s), 41.1MiB/s-41.1MiB/s (43.1MB/s-43.1MB/s), io=82.4MiB (86.4MB), run=2005-2005msec 00:17:58.640 WRITE: bw=41.1MiB/s (43.1MB/s), 41.1MiB/s-41.1MiB/s (43.1MB/s-43.1MB/s), io=82.3MiB (86.3MB), run=2005-2005msec 00:17:58.640 20:45:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:17:58.640 20:45:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:17:58.640 20:45:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:17:58.640 20:45:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:58.640 20:45:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:17:58.640 20:45:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:17:58.640 20:45:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:17:58.640 20:45:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:17:58.640 20:45:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:17:58.640 20:45:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:17:58.640 20:45:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:17:58.640 20:45:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:17:58.640 20:45:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:17:58.640 20:45:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:17:58.640 20:45:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:17:58.640 20:45:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:17:58.640 20:45:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:17:58.640 20:45:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:17:58.640 20:45:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:17:58.640 20:45:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:17:58.640 20:45:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:17:58.640 20:45:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:17:58.640 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:17:58.640 fio-3.35 00:17:58.640 Starting 1 thread 00:18:01.172 00:18:01.172 test: (groupid=0, jobs=1): err= 0: pid=75503: Tue Nov 26 20:45:55 2024 00:18:01.172 read: IOPS=9103, BW=142MiB/s (149MB/s)(285MiB/2007msec) 00:18:01.172 slat (usec): min=2, max=115, avg= 3.30, stdev= 1.96 00:18:01.172 clat (usec): min=2174, max=16428, avg=7848.87, stdev=2245.65 00:18:01.172 lat (usec): min=2176, max=16431, avg=7852.16, stdev=2245.89 00:18:01.172 clat percentiles (usec): 00:18:01.172 | 1.00th=[ 3687], 5.00th=[ 4359], 10.00th=[ 5014], 20.00th=[ 5866], 00:18:01.172 | 30.00th=[ 6587], 40.00th=[ 7111], 50.00th=[ 7635], 60.00th=[ 8225], 00:18:01.172 | 70.00th=[ 8979], 80.00th=[ 9765], 90.00th=[10683], 95.00th=[11863], 00:18:01.172 | 99.00th=[13829], 99.50th=[14222], 99.90th=[15533], 99.95th=[16057], 00:18:01.172 | 99.99th=[16319] 00:18:01.172 bw ( KiB/s): min=63968, max=80544, per=48.97%, avg=71320.00, stdev=7824.09, samples=4 00:18:01.172 iops : min= 3998, max= 5034, avg=4457.50, stdev=489.01, samples=4 00:18:01.172 write: IOPS=5293, BW=82.7MiB/s (86.7MB/s)(146MiB/1768msec); 0 zone resets 00:18:01.172 slat (usec): min=29, max=415, avg=36.38, stdev=10.01 00:18:01.172 clat (usec): min=3764, max=18829, avg=11119.53, stdev=2220.71 00:18:01.172 lat (usec): min=3799, max=18865, avg=11155.91, stdev=2224.03 00:18:01.172 clat percentiles (usec): 00:18:01.172 | 1.00th=[ 6915], 5.00th=[ 7832], 10.00th=[ 8455], 20.00th=[ 9241], 00:18:01.172 | 30.00th=[ 9765], 40.00th=[10290], 50.00th=[10945], 60.00th=[11600], 00:18:01.172 | 70.00th=[12125], 80.00th=[12911], 90.00th=[14222], 95.00th=[15270], 00:18:01.172 | 99.00th=[16712], 99.50th=[17171], 99.90th=[17957], 99.95th=[18220], 00:18:01.172 | 99.99th=[18744] 00:18:01.172 bw ( KiB/s): min=67360, max=83776, per=87.78%, avg=74344.00, stdev=7777.14, samples=4 00:18:01.172 iops : min= 4210, max= 5236, avg=4646.50, stdev=486.07, samples=4 00:18:01.172 lat (msec) : 4=1.72%, 10=65.04%, 20=33.24% 00:18:01.172 cpu : usr=82.00%, sys=14.11%, ctx=5, majf=0, minf=8 00:18:01.172 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:18:01.172 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:01.172 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:01.172 issued rwts: total=18270,9359,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:01.172 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:01.172 00:18:01.172 Run status group 0 (all jobs): 00:18:01.172 READ: 
bw=142MiB/s (149MB/s), 142MiB/s-142MiB/s (149MB/s-149MB/s), io=285MiB (299MB), run=2007-2007msec 00:18:01.172 WRITE: bw=82.7MiB/s (86.7MB/s), 82.7MiB/s-82.7MiB/s (86.7MB/s-86.7MB/s), io=146MiB (153MB), run=1768-1768msec 00:18:01.172 20:45:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:01.430 20:45:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:18:01.430 20:45:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:18:01.430 20:45:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:18:01.430 20:45:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:18:01.430 20:45:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:01.430 20:45:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:18:01.430 20:45:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:01.430 20:45:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:18:01.430 20:45:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:01.430 20:45:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:01.430 rmmod nvme_tcp 00:18:01.430 rmmod nvme_fabrics 00:18:01.430 rmmod nvme_keyring 00:18:01.430 20:45:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:01.430 20:45:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:18:01.430 20:45:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:18:01.430 20:45:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 75383 ']' 00:18:01.430 20:45:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 75383 00:18:01.430 20:45:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 75383 ']' 00:18:01.430 20:45:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 75383 00:18:01.430 20:45:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:18:01.430 20:45:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:01.430 20:45:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75383 00:18:01.430 killing process with pid 75383 00:18:01.430 20:45:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:01.430 20:45:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:01.430 20:45:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75383' 00:18:01.430 20:45:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 75383 00:18:01.430 20:45:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 75383 00:18:01.998 20:45:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:01.998 20:45:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:01.998 20:45:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:01.998 20:45:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:18:01.998 20:45:56 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:18:01.998 20:45:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:01.998 20:45:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:18:01.998 20:45:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:01.998 20:45:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:18:01.998 20:45:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:18:01.998 20:45:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:18:01.998 20:45:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:18:01.998 20:45:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:18:01.998 20:45:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:18:01.998 20:45:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:18:01.998 20:45:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:18:01.998 20:45:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:18:01.998 20:45:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:18:01.998 20:45:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:18:01.998 20:45:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:18:01.998 20:45:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:01.998 20:45:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:01.998 20:45:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@246 -- # remove_spdk_ns 00:18:01.998 20:45:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:01.998 20:45:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:01.998 20:45:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:02.256 20:45:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@300 -- # return 0 00:18:02.256 ************************************ 00:18:02.256 END TEST nvmf_fio_host 00:18:02.256 ************************************ 00:18:02.256 00:18:02.256 real 0m9.395s 00:18:02.256 user 0m36.525s 00:18:02.256 sys 0m2.940s 00:18:02.256 20:45:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:02.256 20:45:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:18:02.256 20:45:57 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:18:02.256 20:45:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:02.256 20:45:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:02.256 20:45:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:18:02.256 ************************************ 00:18:02.256 START TEST nvmf_failover 
00:18:02.256 ************************************ 00:18:02.256 20:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:18:02.256 * Looking for test storage... 00:18:02.256 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:02.256 20:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:02.256 20:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:02.256 20:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lcov --version 00:18:02.256 20:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:02.256 20:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:02.256 20:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:02.256 20:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:02.256 20:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:18:02.256 20:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:18:02.256 20:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:18:02.256 20:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:18:02.256 20:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:18:02.256 20:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:18:02.257 20:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:18:02.257 20:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:02.515 20:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:18:02.515 20:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:18:02.515 20:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:02.515 20:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:02.515 20:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:18:02.515 20:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:18:02.515 20:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:02.515 20:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:18:02.515 20:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:18:02.515 20:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:18:02.515 20:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:18:02.515 20:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:02.515 20:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:18:02.515 20:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:18:02.516 20:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:02.516 20:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:02.516 20:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:18:02.516 20:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:02.516 20:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:02.516 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:02.516 --rc genhtml_branch_coverage=1 00:18:02.516 --rc genhtml_function_coverage=1 00:18:02.516 --rc genhtml_legend=1 00:18:02.516 --rc geninfo_all_blocks=1 00:18:02.516 --rc geninfo_unexecuted_blocks=1 00:18:02.516 00:18:02.516 ' 00:18:02.516 20:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:02.516 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:02.516 --rc genhtml_branch_coverage=1 00:18:02.516 --rc genhtml_function_coverage=1 00:18:02.516 --rc genhtml_legend=1 00:18:02.516 --rc geninfo_all_blocks=1 00:18:02.516 --rc geninfo_unexecuted_blocks=1 00:18:02.516 00:18:02.516 ' 00:18:02.516 20:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:02.516 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:02.516 --rc genhtml_branch_coverage=1 00:18:02.516 --rc genhtml_function_coverage=1 00:18:02.516 --rc genhtml_legend=1 00:18:02.516 --rc geninfo_all_blocks=1 00:18:02.516 --rc geninfo_unexecuted_blocks=1 00:18:02.516 00:18:02.516 ' 00:18:02.516 20:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:02.516 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:02.516 --rc genhtml_branch_coverage=1 00:18:02.516 --rc genhtml_function_coverage=1 00:18:02.516 --rc genhtml_legend=1 00:18:02.516 --rc geninfo_all_blocks=1 00:18:02.516 --rc geninfo_unexecuted_blocks=1 00:18:02.516 00:18:02.516 ' 00:18:02.516 20:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:02.516 20:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:18:02.516 20:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:02.516 20:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:18:02.516 20:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:02.516 20:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:02.516 20:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:02.516 20:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:02.516 20:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:02.516 20:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:02.516 20:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:02.516 20:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:02.516 20:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b 00:18:02.516 20:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=5b7a0101-ee75-44bd-b64f-b6a56d193f2b 00:18:02.516 20:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:02.516 20:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:02.516 20:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:02.516 20:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:02.516 20:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:02.516 20:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:18:02.516 20:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:02.516 20:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:02.516 20:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:02.516 20:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:02.516 20:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:02.516 
20:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:02.516 20:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:18:02.516 20:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:02.516 20:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:18:02.516 20:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:02.516 20:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:02.516 20:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:02.516 20:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:02.516 20:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:02.516 20:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:02.516 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:02.516 20:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:02.516 20:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:02.516 20:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:02.516 20:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:02.516 20:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:02.516 20:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:02.516 20:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:02.516 20:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:18:02.516 20:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:02.516 20:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:02.516 20:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:02.516 20:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 
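The "Looking for test storage" block above traces the lcov version gate in scripts/common.sh (lt / cmp_versions): both version strings are split on ".-:" and compared field by field. A simplified standalone sketch of that comparison, with the input validation elided and the helper name chosen only for illustration, is:

# Simplified sketch of the lt/cmp_versions logic traced above; not the exact
# scripts/common.sh source (the per-field "decimal" validation is omitted).
version_lt() {
    local IFS=.-:                      # split fields on '.', '-', ':' as in the trace
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local v a b len
    len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
        a=${ver1[v]:-0}                # missing trailing fields count as 0
        b=${ver2[v]:-0}
        (( a < b )) && return 0        # first differing field decides
        (( a > b )) && return 1
    done
    return 1                           # equal versions are not "less than"
}

version_lt 1.15 2 && echo "lcov 1.15 predates 2.x"

In the trace this comparison succeeds for "lt 1.15 2", which is why the newer-style LCOV_OPTS/LCOV settings are exported before the failover test proper begins.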
00:18:02.516 20:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:02.516 20:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:02.516 20:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:02.516 20:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:02.516 20:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:18:02.516 20:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:18:02.516 20:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:18:02.516 20:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:18:02.516 20:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:18:02.516 20:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@460 -- # nvmf_veth_init 00:18:02.516 20:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:02.516 20:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:18:02.516 20:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:18:02.516 20:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:18:02.516 20:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:02.516 20:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:18:02.516 20:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:02.516 20:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:18:02.516 20:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:02.516 20:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:18:02.516 20:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:02.516 20:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:02.516 20:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:02.516 20:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:02.516 20:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:02.517 20:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:02.517 20:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:18:02.517 Cannot find device "nvmf_init_br" 00:18:02.517 20:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # true 00:18:02.517 20:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:18:02.517 Cannot find device "nvmf_init_br2" 00:18:02.517 20:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # true 00:18:02.517 20:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 
00:18:02.517 Cannot find device "nvmf_tgt_br" 00:18:02.517 20:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@164 -- # true 00:18:02.517 20:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:18:02.517 Cannot find device "nvmf_tgt_br2" 00:18:02.517 20:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@165 -- # true 00:18:02.517 20:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:18:02.517 Cannot find device "nvmf_init_br" 00:18:02.517 20:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # true 00:18:02.517 20:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:18:02.517 Cannot find device "nvmf_init_br2" 00:18:02.517 20:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@167 -- # true 00:18:02.517 20:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:18:02.517 Cannot find device "nvmf_tgt_br" 00:18:02.517 20:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@168 -- # true 00:18:02.517 20:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:18:02.517 Cannot find device "nvmf_tgt_br2" 00:18:02.517 20:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # true 00:18:02.517 20:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:18:02.517 Cannot find device "nvmf_br" 00:18:02.517 20:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # true 00:18:02.517 20:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:18:02.517 Cannot find device "nvmf_init_if" 00:18:02.517 20:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # true 00:18:02.517 20:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:18:02.517 Cannot find device "nvmf_init_if2" 00:18:02.517 20:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # true 00:18:02.517 20:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:02.517 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:02.517 20:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@173 -- # true 00:18:02.517 20:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:02.517 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:02.517 20:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # true 00:18:02.517 20:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:18:02.517 20:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:02.517 20:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:18:02.517 20:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:02.775 20:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:02.775 20:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:02.775 
20:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:02.775 20:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:02.775 20:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:18:02.775 20:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:18:02.775 20:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:18:02.775 20:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:18:02.775 20:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:18:02.775 20:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:18:02.775 20:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:18:02.775 20:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:18:02.775 20:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:18:02.775 20:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:02.775 20:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:02.775 20:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:02.775 20:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:18:02.775 20:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:18:02.775 20:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:18:02.775 20:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:18:02.775 20:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:02.775 20:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:02.775 20:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:02.775 20:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:18:02.775 20:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:18:02.775 20:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:18:02.775 20:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:02.775 20:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j 
ACCEPT' 00:18:02.775 20:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:18:02.775 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:02.775 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.140 ms 00:18:02.775 00:18:02.775 --- 10.0.0.3 ping statistics --- 00:18:02.775 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:02.775 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:18:02.775 20:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:18:03.062 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:18:03.062 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.075 ms 00:18:03.062 00:18:03.062 --- 10.0.0.4 ping statistics --- 00:18:03.062 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:03.062 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:18:03.062 20:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:03.062 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:03.062 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:18:03.062 00:18:03.062 --- 10.0.0.1 ping statistics --- 00:18:03.062 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:03.062 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:18:03.062 20:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:18:03.062 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:03.062 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms 00:18:03.062 00:18:03.062 --- 10.0.0.2 ping statistics --- 00:18:03.062 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:03.062 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:18:03.062 20:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:03.062 20:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@461 -- # return 0 00:18:03.062 20:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:03.062 20:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:03.062 20:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:03.062 20:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:03.062 20:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:03.062 20:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:03.062 20:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:03.062 20:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:18:03.062 20:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:03.062 20:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:03.062 20:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:18:03.062 20:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=75775 00:18:03.062 20:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:18:03.062 20:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 75775 00:18:03.062 20:45:57 
nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 75775 ']' 00:18:03.062 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:03.062 20:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:03.062 20:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:03.062 20:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:03.062 20:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:03.062 20:45:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:18:03.062 [2024-11-26 20:45:57.865782] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:18:03.062 [2024-11-26 20:45:57.866069] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:03.062 [2024-11-26 20:45:58.010138] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:03.323 [2024-11-26 20:45:58.077665] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:03.323 [2024-11-26 20:45:58.077905] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:03.323 [2024-11-26 20:45:58.078069] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:03.323 [2024-11-26 20:45:58.078212] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:03.323 [2024-11-26 20:45:58.078292] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
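The nvmf_veth_init sequence traced above builds the whole test topology from plain iproute2/iptables commands: a network namespace for the target, veth pairs whose bridge-side ends are enslaved to nvmf_br, and a firewall exception for port 4420. Condensed into a standalone sketch (same interface names and addresses as the trace; the second *_if2/*_br2 pair, the extra port rules, and error handling are elided):

# Condensed from the nvmf_veth_init trace above.
ip netns add nvmf_tgt_ns_spdk

# One veth pair per side: the "_if" ends carry traffic, the "_br" ends get
# bridged together; the target end is moved into the namespace.
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

# Initiator at 10.0.0.1, target at 10.0.0.3 inside the namespace.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

# Bridge the peer ends so initiator and target namespaces can reach each other.
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br

# Let NVMe/TCP traffic in on the initiator side, then verify connectivity.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.3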
00:18:03.323 [2024-11-26 20:45:58.079360] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:03.323 [2024-11-26 20:45:58.079419] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:03.323 [2024-11-26 20:45:58.079420] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:03.323 [2024-11-26 20:45:58.161476] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:03.323 20:45:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:03.323 20:45:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:18:03.323 20:45:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:03.323 20:45:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:03.323 20:45:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:18:03.323 20:45:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:03.323 20:45:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:03.888 [2024-11-26 20:45:58.642631] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:03.888 20:45:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:18:04.146 Malloc0 00:18:04.146 20:45:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:04.403 20:45:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:04.660 20:45:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:18:04.918 [2024-11-26 20:45:59.873073] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:04.918 20:45:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:18:05.176 [2024-11-26 20:46:00.161360] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:18:05.433 20:46:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:18:05.433 [2024-11-26 20:46:00.393525] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4422 *** 00:18:05.433 20:46:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:18:05.433 20:46:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=75832 00:18:05.433 20:46:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 
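The target-side setup traced above reduces to a short rpc.py sequence: create the TCP transport, back the subsystem with a 64 MiB malloc bdev, and expose it on three listeners so the failover test has paths to add and remove. Consolidated below (paths, NQN, and options exactly as captured; assumes the nvmf_tgt started inside nvmf_tgt_ns_spdk earlier in the trace is still running):

rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1

$rpc_py nvmf_create_transport -t tcp -o -u 8192               # transport options as captured in the trace
$rpc_py bdev_malloc_create 64 512 -b Malloc0                  # 64 MiB malloc bdev, 512-byte blocks
$rpc_py nvmf_create_subsystem "$nqn" -a -s SPDK00000000000001 # allow any host, fixed serial
$rpc_py nvmf_subsystem_add_ns "$nqn" Malloc0                  # expose the bdev as a namespace

# Three listeners on the in-namespace target address; the failover test
# below removes and re-adds these while I/O is running.
for port in 4420 4421 4422; do
    $rpc_py nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.3 -s "$port"
done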
00:18:05.433 20:46:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 75832 /var/tmp/bdevperf.sock 00:18:05.433 20:46:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 75832 ']' 00:18:05.433 20:46:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:05.433 20:46:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:05.433 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:05.433 20:46:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:05.433 20:46:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:05.433 20:46:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:18:06.364 20:46:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:06.364 20:46:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:18:06.364 20:46:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:18:06.930 NVMe0n1 00:18:06.930 20:46:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:18:07.188 00:18:07.188 20:46:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=75850 00:18:07.188 20:46:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:07.188 20:46:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:18:08.123 20:46:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:18:08.381 20:46:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:18:11.664 20:46:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:18:11.922 00:18:11.922 20:46:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:18:12.180 20:46:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:18:15.498 20:46:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:18:15.498 [2024-11-26 20:46:10.244463] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:15.498 20:46:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:18:16.434 20:46:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:18:16.692 20:46:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 75850 00:18:23.275 { 00:18:23.275 "results": [ 00:18:23.275 { 00:18:23.275 "job": "NVMe0n1", 00:18:23.275 "core_mask": "0x1", 00:18:23.275 "workload": "verify", 00:18:23.275 "status": "finished", 00:18:23.275 "verify_range": { 00:18:23.275 "start": 0, 00:18:23.275 "length": 16384 00:18:23.275 }, 00:18:23.275 "queue_depth": 128, 00:18:23.275 "io_size": 4096, 00:18:23.275 "runtime": 15.011195, 00:18:23.275 "iops": 10190.594419698098, 00:18:23.275 "mibps": 39.807009451945696, 00:18:23.275 "io_failed": 4117, 00:18:23.275 "io_timeout": 0, 00:18:23.275 "avg_latency_us": 12205.267818933036, 00:18:23.275 "min_latency_us": 475.9161904761905, 00:18:23.275 "max_latency_us": 16103.131428571429 00:18:23.275 } 00:18:23.275 ], 00:18:23.275 "core_count": 1 00:18:23.275 } 00:18:23.275 20:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 75832 00:18:23.275 20:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 75832 ']' 00:18:23.275 20:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 75832 00:18:23.275 20:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:18:23.275 20:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:23.275 20:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75832 00:18:23.275 killing process with pid 75832 00:18:23.275 20:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:23.275 20:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:23.275 20:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75832' 00:18:23.275 20:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 75832 00:18:23.275 20:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 75832 00:18:23.275 20:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:18:23.275 [2024-11-26 20:46:00.449789] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:18:23.275 [2024-11-26 20:46:00.449891] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75832 ] 00:18:23.275 [2024-11-26 20:46:00.596804] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:23.275 [2024-11-26 20:46:00.661433] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:23.275 [2024-11-26 20:46:00.734522] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:23.275 Running I/O for 15 seconds... 
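The failover exercise itself is the listener choreography traced above: bdevperf gets two paths to the subsystem in failover mode, a 15-second verify workload is started over the bdevperf RPC socket, and listeners are then removed and re-added one at a time so I/O must keep switching paths. Condensed into a sketch (commands as captured in the trace; bdevperf itself was launched earlier with '-z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f'):

rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
brpc="$rpc_py -s /var/tmp/bdevperf.sock"      # bdevperf's own RPC socket
nqn=nqn.2016-06.io.spdk:cnode1

# Two initial paths to the subsystem, both attached in failover mode (-x failover).
$brpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n "$nqn" -x failover
$brpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n "$nqn" -x failover

# Kick off the verify workload (queue depth 128, 4 KiB I/Os per the results
# JSON above) in the background, then cycle listeners underneath it.
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &
run_test_pid=$!

sleep 1
$rpc_py nvmf_subsystem_remove_listener "$nqn" -t tcp -a 10.0.0.3 -s 4420   # drop the active path
sleep 3
$brpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n "$nqn" -x failover
$rpc_py nvmf_subsystem_remove_listener "$nqn" -t tcp -a 10.0.0.3 -s 4421
sleep 3
$rpc_py nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.3 -s 4420      # bring 4420 back
sleep 1
$rpc_py nvmf_subsystem_remove_listener "$nqn" -t tcp -a 10.0.0.3 -s 4422
wait "$run_test_pid"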
00:18:23.275 11104.00 IOPS, 43.38 MiB/s [2024-11-26T20:46:18.268Z] [2024-11-26 20:46:03.277172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:104848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.275 [2024-11-26 20:46:03.277255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.275 [2024-11-26 20:46:03.277278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:104856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.275 [2024-11-26 20:46:03.277293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.275 [2024-11-26 20:46:03.277309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:104864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.275 [2024-11-26 20:46:03.277323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.275 [2024-11-26 20:46:03.277338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:104872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.275 [2024-11-26 20:46:03.277351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.275 [2024-11-26 20:46:03.277366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:104880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.275 [2024-11-26 20:46:03.277379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.275 [2024-11-26 20:46:03.277399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:104888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.275 [2024-11-26 20:46:03.277412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.275 [2024-11-26 20:46:03.277427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:104896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.276 [2024-11-26 20:46:03.277440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.276 [2024-11-26 20:46:03.277454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:104904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.276 [2024-11-26 20:46:03.277467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.276 [2024-11-26 20:46:03.277483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:104912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.276 [2024-11-26 20:46:03.277496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.276 [2024-11-26 20:46:03.277511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:104920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.276 [2024-11-26 20:46:03.277523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:18:23.276 [2024-11-26 20:46:03.277538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:104928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.276 [2024-11-26 20:46:03.277585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.276 [2024-11-26 20:46:03.277600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:104936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.276 [2024-11-26 20:46:03.277614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.276 [2024-11-26 20:46:03.277628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:104944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.276 [2024-11-26 20:46:03.277642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.276 [2024-11-26 20:46:03.277656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:104952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.276 [2024-11-26 20:46:03.277669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.276 [2024-11-26 20:46:03.277684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:105280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.276 [2024-11-26 20:46:03.277704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.276 [2024-11-26 20:46:03.277719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:105288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.276 [2024-11-26 20:46:03.277732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.276 [2024-11-26 20:46:03.277746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:105296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.276 [2024-11-26 20:46:03.277759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.276 [2024-11-26 20:46:03.277774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:105304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.276 [2024-11-26 20:46:03.277787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.276 [2024-11-26 20:46:03.277802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:105312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.276 [2024-11-26 20:46:03.277815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.276 [2024-11-26 20:46:03.277829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:105320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.276 [2024-11-26 20:46:03.277842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.276 
[2024-11-26 20:46:03.277857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:105328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.276 [2024-11-26 20:46:03.277869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.276 [2024-11-26 20:46:03.277884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:105336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.276 [2024-11-26 20:46:03.277897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.276 [2024-11-26 20:46:03.277911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:105344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.276 [2024-11-26 20:46:03.277924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.276 [2024-11-26 20:46:03.277945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:105352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.276 [2024-11-26 20:46:03.277960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.276 [2024-11-26 20:46:03.277974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:105360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.276 [2024-11-26 20:46:03.277988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.276 [2024-11-26 20:46:03.278003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:105368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.276 [2024-11-26 20:46:03.278016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.276 [2024-11-26 20:46:03.278031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:105376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.276 [2024-11-26 20:46:03.278044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.276 [2024-11-26 20:46:03.278058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:105384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.276 [2024-11-26 20:46:03.278071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.276 [2024-11-26 20:46:03.278085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:105392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.276 [2024-11-26 20:46:03.278098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.276 [2024-11-26 20:46:03.278113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:105400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.276 [2024-11-26 20:46:03.278126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.276 [2024-11-26 20:46:03.278140] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:105408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.276 [2024-11-26 20:46:03.278164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.276 [2024-11-26 20:46:03.278179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:105416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.276 [2024-11-26 20:46:03.278192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.276 [2024-11-26 20:46:03.278207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:105424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.276 [2024-11-26 20:46:03.278220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.277 [2024-11-26 20:46:03.278234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:105432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.277 [2024-11-26 20:46:03.278248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.277 [2024-11-26 20:46:03.278263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:105440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.277 [2024-11-26 20:46:03.278277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.277 [2024-11-26 20:46:03.278292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:105448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.277 [2024-11-26 20:46:03.278311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.277 [2024-11-26 20:46:03.278326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:105456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.277 [2024-11-26 20:46:03.278339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.277 [2024-11-26 20:46:03.278354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:105464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.277 [2024-11-26 20:46:03.278367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.277 [2024-11-26 20:46:03.278382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:104960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.277 [2024-11-26 20:46:03.278395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.277 [2024-11-26 20:46:03.278410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:104968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.277 [2024-11-26 20:46:03.278424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.277 [2024-11-26 20:46:03.278439] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:9 nsid:1 lba:104976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.277 [2024-11-26 20:46:03.278452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.277 [2024-11-26 20:46:03.278467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:104984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.277 [2024-11-26 20:46:03.278480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.277 [2024-11-26 20:46:03.278494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:104992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.277 [2024-11-26 20:46:03.278507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.277 [2024-11-26 20:46:03.278522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:105000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.277 [2024-11-26 20:46:03.278535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.277 [2024-11-26 20:46:03.278550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:105008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.277 [2024-11-26 20:46:03.278563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.277 [2024-11-26 20:46:03.278577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:105016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.277 [2024-11-26 20:46:03.278590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.277 [2024-11-26 20:46:03.278605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:105472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.277 [2024-11-26 20:46:03.278620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.277 [2024-11-26 20:46:03.278635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:105480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.277 [2024-11-26 20:46:03.278648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.277 [2024-11-26 20:46:03.278667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:105488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.277 [2024-11-26 20:46:03.278681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.277 [2024-11-26 20:46:03.278696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:105496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.277 [2024-11-26 20:46:03.278709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.277 [2024-11-26 20:46:03.278724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 
lba:105504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.277 [2024-11-26 20:46:03.278737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.277 [2024-11-26 20:46:03.278751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:105512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.277 [2024-11-26 20:46:03.278765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.277 [2024-11-26 20:46:03.278779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:105520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.277 [2024-11-26 20:46:03.278793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.277 [2024-11-26 20:46:03.278807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:105528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.277 [2024-11-26 20:46:03.278820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.277 [2024-11-26 20:46:03.278835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:105536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.277 [2024-11-26 20:46:03.278848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.277 [2024-11-26 20:46:03.278862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:105544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.277 [2024-11-26 20:46:03.278883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.277 [2024-11-26 20:46:03.278897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:105552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.277 [2024-11-26 20:46:03.278911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.277 [2024-11-26 20:46:03.278925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:105560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.277 [2024-11-26 20:46:03.278939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.277 [2024-11-26 20:46:03.278953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:105568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.277 [2024-11-26 20:46:03.278967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.277 [2024-11-26 20:46:03.278982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:105576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.277 [2024-11-26 20:46:03.278995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.277 [2024-11-26 20:46:03.279010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:105584 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:18:23.278 [2024-11-26 20:46:03.279023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.278 [2024-11-26 20:46:03.279043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:105592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.278 [2024-11-26 20:46:03.279056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.278 [2024-11-26 20:46:03.279071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:105024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.278 [2024-11-26 20:46:03.279086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.278 [2024-11-26 20:46:03.279100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:105032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.278 [2024-11-26 20:46:03.279113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.278 [2024-11-26 20:46:03.279128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:105040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.278 [2024-11-26 20:46:03.279141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.278 [2024-11-26 20:46:03.279163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:105048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.278 [2024-11-26 20:46:03.279177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.278 [2024-11-26 20:46:03.279191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:105056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.278 [2024-11-26 20:46:03.279204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.278 [2024-11-26 20:46:03.279219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:105064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.278 [2024-11-26 20:46:03.279232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.278 [2024-11-26 20:46:03.279246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:105072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.278 [2024-11-26 20:46:03.279260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.278 [2024-11-26 20:46:03.279274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:105080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.278 [2024-11-26 20:46:03.279287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.278 [2024-11-26 20:46:03.279310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:105600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.278 [2024-11-26 
20:46:03.279323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.278 [2024-11-26 20:46:03.279338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:105608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.278 [2024-11-26 20:46:03.279354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.278 [2024-11-26 20:46:03.279369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:105616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.278 [2024-11-26 20:46:03.279382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.278 [2024-11-26 20:46:03.279397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:105624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.278 [2024-11-26 20:46:03.279416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.278 [2024-11-26 20:46:03.279431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:105632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.278 [2024-11-26 20:46:03.279444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.278 [2024-11-26 20:46:03.279459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:105640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.278 [2024-11-26 20:46:03.279472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.278 [2024-11-26 20:46:03.279486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:105648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.278 [2024-11-26 20:46:03.279500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.278 [2024-11-26 20:46:03.279514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:105656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.278 [2024-11-26 20:46:03.279528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.278 [2024-11-26 20:46:03.279542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:105664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.278 [2024-11-26 20:46:03.279557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.278 [2024-11-26 20:46:03.279572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:105672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.278 [2024-11-26 20:46:03.279585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.278 [2024-11-26 20:46:03.279601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:105680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.278 [2024-11-26 20:46:03.279614] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.278 [2024-11-26 20:46:03.279628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:105688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.278 [2024-11-26 20:46:03.279642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.278 [2024-11-26 20:46:03.279656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:105696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.278 [2024-11-26 20:46:03.279669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.278 [2024-11-26 20:46:03.279684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:105704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.278 [2024-11-26 20:46:03.279697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.278 [2024-11-26 20:46:03.279712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:105712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.278 [2024-11-26 20:46:03.279725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.278 [2024-11-26 20:46:03.279739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:105720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.278 [2024-11-26 20:46:03.279753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.278 [2024-11-26 20:46:03.279772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:105728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.278 [2024-11-26 20:46:03.279786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.278 [2024-11-26 20:46:03.279800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:105736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.279 [2024-11-26 20:46:03.279815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.279 [2024-11-26 20:46:03.279830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:105744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.279 [2024-11-26 20:46:03.279843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.279 [2024-11-26 20:46:03.279857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:105752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.279 [2024-11-26 20:46:03.279870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.279 [2024-11-26 20:46:03.279885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:105088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.279 [2024-11-26 20:46:03.279898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.279 [2024-11-26 20:46:03.279912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:105096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.279 [2024-11-26 20:46:03.279925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.279 [2024-11-26 20:46:03.279940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:105104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.279 [2024-11-26 20:46:03.279952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.279 [2024-11-26 20:46:03.279967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:105112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.279 [2024-11-26 20:46:03.279980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.279 [2024-11-26 20:46:03.279996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:105120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.279 [2024-11-26 20:46:03.280011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.279 [2024-11-26 20:46:03.280026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:105128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.279 [2024-11-26 20:46:03.280040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.279 [2024-11-26 20:46:03.280054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:105136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.279 [2024-11-26 20:46:03.280067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.279 [2024-11-26 20:46:03.280082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:105144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.279 [2024-11-26 20:46:03.280094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.279 [2024-11-26 20:46:03.280109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:105152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.279 [2024-11-26 20:46:03.280126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.279 [2024-11-26 20:46:03.280141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:105160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.279 [2024-11-26 20:46:03.280162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.279 [2024-11-26 20:46:03.280177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:105168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.279 [2024-11-26 20:46:03.280191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.279 [2024-11-26 20:46:03.280206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:105176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.279 [2024-11-26 20:46:03.280219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.279 [2024-11-26 20:46:03.280234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:105184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.279 [2024-11-26 20:46:03.280248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.279 [2024-11-26 20:46:03.280262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:105192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.279 [2024-11-26 20:46:03.280277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.279 [2024-11-26 20:46:03.280292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:105200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.279 [2024-11-26 20:46:03.280305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.279 [2024-11-26 20:46:03.280319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:105208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.279 [2024-11-26 20:46:03.280332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.279 [2024-11-26 20:46:03.280347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:105760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.279 [2024-11-26 20:46:03.280360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.279 [2024-11-26 20:46:03.280375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:105768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.279 [2024-11-26 20:46:03.280388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.279 [2024-11-26 20:46:03.280403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:105776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.279 [2024-11-26 20:46:03.280417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.279 [2024-11-26 20:46:03.280431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:105784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.279 [2024-11-26 20:46:03.280444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.279 [2024-11-26 20:46:03.280459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:105792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.279 [2024-11-26 20:46:03.280473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:18:23.279 [2024-11-26 20:46:03.280493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:105800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.279 [2024-11-26 20:46:03.280506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.279 [2024-11-26 20:46:03.280521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:105808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.279 [2024-11-26 20:46:03.280534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.279 [2024-11-26 20:46:03.280550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:105816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.279 [2024-11-26 20:46:03.280563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.280 [2024-11-26 20:46:03.280577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:105824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.280 [2024-11-26 20:46:03.280590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.280 [2024-11-26 20:46:03.280605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:105832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.280 [2024-11-26 20:46:03.280618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.280 [2024-11-26 20:46:03.280633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:105840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.280 [2024-11-26 20:46:03.280646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.280 [2024-11-26 20:46:03.280660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:105848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.280 [2024-11-26 20:46:03.280674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.280 [2024-11-26 20:46:03.280688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:105856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.280 [2024-11-26 20:46:03.280701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.280 [2024-11-26 20:46:03.280717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:105864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.280 [2024-11-26 20:46:03.280732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.280 [2024-11-26 20:46:03.280747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:105216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.280 [2024-11-26 20:46:03.280760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.280 [2024-11-26 
20:46:03.280775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:105224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.280 [2024-11-26 20:46:03.280788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.280 [2024-11-26 20:46:03.280802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:105232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.280 [2024-11-26 20:46:03.280816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.280 [2024-11-26 20:46:03.280830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:105240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.280 [2024-11-26 20:46:03.280848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.280 [2024-11-26 20:46:03.280863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:105248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.280 [2024-11-26 20:46:03.280876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.280 [2024-11-26 20:46:03.280891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:105256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.280 [2024-11-26 20:46:03.280904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.280 [2024-11-26 20:46:03.280919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:105264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.280 [2024-11-26 20:46:03.280934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.280 [2024-11-26 20:46:03.280976] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:23.280 [2024-11-26 20:46:03.280988] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:23.280 [2024-11-26 20:46:03.280998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:105272 len:8 PRP1 0x0 PRP2 0x0 00:18:23.280 [2024-11-26 20:46:03.281012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.280 [2024-11-26 20:46:03.281080] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.3:4420 to 10.0.0.3:4421 00:18:23.280 [2024-11-26 20:46:03.281131] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:23.280 [2024-11-26 20:46:03.281146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.280 [2024-11-26 20:46:03.281172] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:23.280 [2024-11-26 20:46:03.281186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.280 
[2024-11-26 20:46:03.281200] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:23.280 [2024-11-26 20:46:03.281214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.280 [2024-11-26 20:46:03.281227] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:23.280 [2024-11-26 20:46:03.281240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.280 [2024-11-26 20:46:03.281254] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:18:23.280 [2024-11-26 20:46:03.281289] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x605c60 (9): Bad file descriptor 00:18:23.280 [2024-11-26 20:46:03.284102] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:18:23.280 [2024-11-26 20:46:03.310713] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 00:18:23.280 11136.00 IOPS, 43.50 MiB/s [2024-11-26T20:46:18.273Z] 11250.00 IOPS, 43.95 MiB/s [2024-11-26T20:46:18.273Z] 11257.25 IOPS, 43.97 MiB/s [2024-11-26T20:46:18.273Z] [2024-11-26 20:46:06.992868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:47328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.280 [2024-11-26 20:46:06.992958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.280 [2024-11-26 20:46:06.993016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:47336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.280 [2024-11-26 20:46:06.993033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.280 [2024-11-26 20:46:06.993050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:47344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.280 [2024-11-26 20:46:06.993064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.280 [2024-11-26 20:46:06.993080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:47352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.280 [2024-11-26 20:46:06.993095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.280 [2024-11-26 20:46:06.993111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:47360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.280 [2024-11-26 20:46:06.993125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.280 [2024-11-26 20:46:06.993141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:47368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.280 [2024-11-26 20:46:06.993167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:18:23.280 [2024-11-26 20:46:06.993183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:47376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.281 [2024-11-26 20:46:06.993198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.281 [2024-11-26 20:46:06.993222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:47384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.281 [2024-11-26 20:46:06.993242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.281 [2024-11-26 20:46:06.993258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:47392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.281 [2024-11-26 20:46:06.993272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.281 [2024-11-26 20:46:06.993289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:47400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.281 [2024-11-26 20:46:06.993303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.281 [2024-11-26 20:46:06.993319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:47408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.281 [2024-11-26 20:46:06.993333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.281 [2024-11-26 20:46:06.993348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:47416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.281 [2024-11-26 20:46:06.993363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.281 [2024-11-26 20:46:06.993379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:47424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.281 [2024-11-26 20:46:06.993393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.281 [2024-11-26 20:46:06.993408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:47432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.281 [2024-11-26 20:46:06.993422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.281 [2024-11-26 20:46:06.993447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:47440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.281 [2024-11-26 20:46:06.993488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.281 [2024-11-26 20:46:06.993515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:47448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.281 [2024-11-26 20:46:06.993541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.281 [2024-11-26 20:46:06.993559] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:46816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.281 [2024-11-26 20:46:06.993574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.281 [2024-11-26 20:46:06.993596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:46824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.281 [2024-11-26 20:46:06.993611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.281 [2024-11-26 20:46:06.993628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:46832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.281 [2024-11-26 20:46:06.993644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.281 [2024-11-26 20:46:06.993661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:46840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.281 [2024-11-26 20:46:06.993693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.281 [2024-11-26 20:46:06.993711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:46848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.281 [2024-11-26 20:46:06.993727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.281 [2024-11-26 20:46:06.993745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:46856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.281 [2024-11-26 20:46:06.993761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.281 [2024-11-26 20:46:06.993780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:46864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.281 [2024-11-26 20:46:06.993796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.281 [2024-11-26 20:46:06.993814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:46872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.281 [2024-11-26 20:46:06.993829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.281 [2024-11-26 20:46:06.993847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:46880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.281 [2024-11-26 20:46:06.993863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.281 [2024-11-26 20:46:06.993880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:46888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.281 [2024-11-26 20:46:06.993896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.281 [2024-11-26 20:46:06.993914] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:46896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.281 [2024-11-26 20:46:06.993937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.281 [2024-11-26 20:46:06.993955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:46904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.281 [2024-11-26 20:46:06.993976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.282 [2024-11-26 20:46:06.993994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:46912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.282 [2024-11-26 20:46:06.994010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.282 [2024-11-26 20:46:06.994027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:46920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.282 [2024-11-26 20:46:06.994043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.282 [2024-11-26 20:46:06.994061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:46928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.282 [2024-11-26 20:46:06.994077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.282 [2024-11-26 20:46:06.994094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:46936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.282 [2024-11-26 20:46:06.994110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.282 [2024-11-26 20:46:06.994127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:47456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.282 [2024-11-26 20:46:06.994143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.282 [2024-11-26 20:46:06.994161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:47464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.282 [2024-11-26 20:46:06.994177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.282 [2024-11-26 20:46:06.994207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:47472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.282 [2024-11-26 20:46:06.994223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.282 [2024-11-26 20:46:06.994241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:47480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.282 [2024-11-26 20:46:06.994256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.282 [2024-11-26 20:46:06.994273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:57 nsid:1 lba:47488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.282 [2024-11-26 20:46:06.994290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.282 [2024-11-26 20:46:06.994307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:47496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.282 [2024-11-26 20:46:06.994323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.282 [2024-11-26 20:46:06.994340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:47504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.282 [2024-11-26 20:46:06.994356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.282 [2024-11-26 20:46:06.994380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:47512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.282 [2024-11-26 20:46:06.994396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.282 [2024-11-26 20:46:06.994415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:46944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.282 [2024-11-26 20:46:06.994430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.282 [2024-11-26 20:46:06.994448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:46952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.282 [2024-11-26 20:46:06.994463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.282 [2024-11-26 20:46:06.994480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:46960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.282 [2024-11-26 20:46:06.994496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.282 [2024-11-26 20:46:06.994514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:46968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.282 [2024-11-26 20:46:06.994530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.282 [2024-11-26 20:46:06.994548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:46976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.282 [2024-11-26 20:46:06.994564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.282 [2024-11-26 20:46:06.994581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:46984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.282 [2024-11-26 20:46:06.994597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.282 [2024-11-26 20:46:06.994615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:46992 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.282 [2024-11-26 20:46:06.994631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.282 [2024-11-26 20:46:06.994649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:47000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.282 [2024-11-26 20:46:06.994664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.282 [2024-11-26 20:46:06.994682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:47008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.282 [2024-11-26 20:46:06.994698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.282 [2024-11-26 20:46:06.994715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:47016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.282 [2024-11-26 20:46:06.994731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.282 [2024-11-26 20:46:06.994748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:47024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.282 [2024-11-26 20:46:06.994764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.282 [2024-11-26 20:46:06.994782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:47032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.282 [2024-11-26 20:46:06.994803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.282 [2024-11-26 20:46:06.994821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:47040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.282 [2024-11-26 20:46:06.994837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.282 [2024-11-26 20:46:06.994855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:47048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.282 [2024-11-26 20:46:06.994881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.282 [2024-11-26 20:46:06.994898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:47056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.282 [2024-11-26 20:46:06.994913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.282 [2024-11-26 20:46:06.994930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:47064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.282 [2024-11-26 20:46:06.994945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.282 [2024-11-26 20:46:06.994962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:47520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:18:23.282 [2024-11-26 20:46:06.994977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.282 [2024-11-26 20:46:06.995005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:47528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.282 [2024-11-26 20:46:06.995019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.282 [2024-11-26 20:46:06.995035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:47536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.282 [2024-11-26 20:46:06.995061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.282 [2024-11-26 20:46:06.995075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:47544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.282 [2024-11-26 20:46:06.995088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.282 [2024-11-26 20:46:06.995103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:47552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.282 [2024-11-26 20:46:06.995116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.282 [2024-11-26 20:46:06.995130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:47560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.282 [2024-11-26 20:46:06.995143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.282 [2024-11-26 20:46:06.995158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:47568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.282 [2024-11-26 20:46:06.995171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.282 [2024-11-26 20:46:06.995194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:47576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.282 [2024-11-26 20:46:06.995208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.283 [2024-11-26 20:46:06.995222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:47072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.283 [2024-11-26 20:46:06.995241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.283 [2024-11-26 20:46:06.995257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:47080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.283 [2024-11-26 20:46:06.995270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.283 [2024-11-26 20:46:06.995285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:47088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.283 [2024-11-26 20:46:06.995324] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.283 [2024-11-26 20:46:06.995359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:47096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.283 [2024-11-26 20:46:06.995374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.283 [2024-11-26 20:46:06.995392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:47104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.283 [2024-11-26 20:46:06.995408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.283 [2024-11-26 20:46:06.995426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:47112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.283 [2024-11-26 20:46:06.995442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.283 [2024-11-26 20:46:06.995459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:47120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.283 [2024-11-26 20:46:06.995475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.283 [2024-11-26 20:46:06.995493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:47128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.283 [2024-11-26 20:46:06.995508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.283 [2024-11-26 20:46:06.995525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:47136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.283 [2024-11-26 20:46:06.995541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.283 [2024-11-26 20:46:06.995559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:47144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.283 [2024-11-26 20:46:06.995575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.283 [2024-11-26 20:46:06.995592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:47152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.283 [2024-11-26 20:46:06.995608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.283 [2024-11-26 20:46:06.995626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:47160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.283 [2024-11-26 20:46:06.995641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.283 [2024-11-26 20:46:06.995658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:47168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.283 [2024-11-26 20:46:06.995674] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.283 [2024-11-26 20:46:06.995698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:47176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.283 [2024-11-26 20:46:06.995714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.283 [2024-11-26 20:46:06.995731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:47184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.283 [2024-11-26 20:46:06.995748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.283 [2024-11-26 20:46:06.995766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:47192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.283 [2024-11-26 20:46:06.995781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.283 [2024-11-26 20:46:06.995799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:47200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.283 [2024-11-26 20:46:06.995814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.283 [2024-11-26 20:46:06.995832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:47208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.283 [2024-11-26 20:46:06.995847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.283 [2024-11-26 20:46:06.995875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:47216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.283 [2024-11-26 20:46:06.995895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.283 [2024-11-26 20:46:06.995923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:47224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.283 [2024-11-26 20:46:06.995939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.283 [2024-11-26 20:46:06.995957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:47232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.283 [2024-11-26 20:46:06.995979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.283 [2024-11-26 20:46:06.996007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:47240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.283 [2024-11-26 20:46:06.996024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.283 [2024-11-26 20:46:06.996042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:47248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.283 [2024-11-26 20:46:06.996058] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.283 [2024-11-26 20:46:06.996076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:47256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.283 [2024-11-26 20:46:06.996091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.283 [2024-11-26 20:46:06.996109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:47584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.283 [2024-11-26 20:46:06.996124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.283 [2024-11-26 20:46:06.996142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:47592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.283 [2024-11-26 20:46:06.996170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.283 [2024-11-26 20:46:06.996198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:47600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.283 [2024-11-26 20:46:06.996215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.283 [2024-11-26 20:46:06.996233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:47608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.283 [2024-11-26 20:46:06.996249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.283 [2024-11-26 20:46:06.996267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:47616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.283 [2024-11-26 20:46:06.996283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.283 [2024-11-26 20:46:06.996301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:47624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.283 [2024-11-26 20:46:06.996323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.283 [2024-11-26 20:46:06.996342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:47632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.283 [2024-11-26 20:46:06.996358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.283 [2024-11-26 20:46:06.996376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:47640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.283 [2024-11-26 20:46:06.996392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.283 [2024-11-26 20:46:06.996410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:47648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.283 [2024-11-26 20:46:06.996426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.283 [2024-11-26 20:46:06.996443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:47656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.283 [2024-11-26 20:46:06.996459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.283 [2024-11-26 20:46:06.996480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:47664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.283 [2024-11-26 20:46:06.996496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.283 [2024-11-26 20:46:06.996513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:47672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.283 [2024-11-26 20:46:06.996529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.283 [2024-11-26 20:46:06.996546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:47680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.283 [2024-11-26 20:46:06.996562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.283 [2024-11-26 20:46:06.996580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:47688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.283 [2024-11-26 20:46:06.996595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.283 [2024-11-26 20:46:06.996620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:47696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.283 [2024-11-26 20:46:06.996636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.283 [2024-11-26 20:46:06.996653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:47704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.283 [2024-11-26 20:46:06.996669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.284 [2024-11-26 20:46:06.996686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:47264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.284 [2024-11-26 20:46:06.996701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.284 [2024-11-26 20:46:06.996719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:47272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.284 [2024-11-26 20:46:06.996735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.284 [2024-11-26 20:46:06.996752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:47280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.284 [2024-11-26 20:46:06.996768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.284 
[2024-11-26 20:46:06.996786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:47288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.284 [2024-11-26 20:46:06.996801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.284 [2024-11-26 20:46:06.996818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:47296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.284 [2024-11-26 20:46:06.996834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.284 [2024-11-26 20:46:06.996852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:47304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.284 [2024-11-26 20:46:06.996867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.284 [2024-11-26 20:46:06.996885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:47312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.284 [2024-11-26 20:46:06.996901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.284 [2024-11-26 20:46:06.996918] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x679370 is same with the state(6) to be set 00:18:23.284 [2024-11-26 20:46:06.996937] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:23.284 [2024-11-26 20:46:06.996949] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:23.284 [2024-11-26 20:46:06.996961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:47320 len:8 PRP1 0x0 PRP2 0x0 00:18:23.284 [2024-11-26 20:46:06.996977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.284 [2024-11-26 20:46:06.996994] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:23.284 [2024-11-26 20:46:06.997008] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:23.284 [2024-11-26 20:46:06.997020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47712 len:8 PRP1 0x0 PRP2 0x0 00:18:23.284 [2024-11-26 20:46:06.997036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.284 [2024-11-26 20:46:06.997059] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:23.284 [2024-11-26 20:46:06.997070] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:23.284 [2024-11-26 20:46:06.997082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47720 len:8 PRP1 0x0 PRP2 0x0 00:18:23.284 [2024-11-26 20:46:06.997098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.284 [2024-11-26 20:46:06.997114] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:23.284 [2024-11-26 20:46:06.997125] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed 
manually: 00:18:23.284 [2024-11-26 20:46:06.997138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47728 len:8 PRP1 0x0 PRP2 0x0 00:18:23.284 [2024-11-26 20:46:06.997164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.284 [2024-11-26 20:46:06.997181] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:23.284 [2024-11-26 20:46:06.997192] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:23.284 [2024-11-26 20:46:06.997204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47736 len:8 PRP1 0x0 PRP2 0x0 00:18:23.284 [2024-11-26 20:46:06.997219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.284 [2024-11-26 20:46:06.997246] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:23.284 [2024-11-26 20:46:06.997257] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:23.284 [2024-11-26 20:46:06.997269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47744 len:8 PRP1 0x0 PRP2 0x0 00:18:23.284 [2024-11-26 20:46:06.997283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.284 [2024-11-26 20:46:06.997299] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:23.284 [2024-11-26 20:46:06.997310] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:23.284 [2024-11-26 20:46:06.997321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47752 len:8 PRP1 0x0 PRP2 0x0 00:18:23.284 [2024-11-26 20:46:06.997336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.284 [2024-11-26 20:46:06.997352] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:23.284 [2024-11-26 20:46:06.997363] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:23.284 [2024-11-26 20:46:06.997375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47760 len:8 PRP1 0x0 PRP2 0x0 00:18:23.284 [2024-11-26 20:46:06.997390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.284 [2024-11-26 20:46:06.997406] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:23.284 [2024-11-26 20:46:06.997418] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:23.284 [2024-11-26 20:46:06.997430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47768 len:8 PRP1 0x0 PRP2 0x0 00:18:23.284 [2024-11-26 20:46:06.997456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.284 [2024-11-26 20:46:06.997482] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:23.284 [2024-11-26 20:46:06.997495] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:23.284 [2024-11-26 20:46:06.997504] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47776 len:8 PRP1 0x0 PRP2 0x0 00:18:23.284 [2024-11-26 20:46:06.997523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.284 [2024-11-26 20:46:06.997536] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:23.284 [2024-11-26 20:46:06.997546] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:23.284 [2024-11-26 20:46:06.997556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47784 len:8 PRP1 0x0 PRP2 0x0 00:18:23.284 [2024-11-26 20:46:06.997569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.284 [2024-11-26 20:46:06.997582] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:23.284 [2024-11-26 20:46:06.997592] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:23.284 [2024-11-26 20:46:06.997601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47792 len:8 PRP1 0x0 PRP2 0x0 00:18:23.284 [2024-11-26 20:46:06.997614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.284 [2024-11-26 20:46:06.997628] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:23.284 [2024-11-26 20:46:06.997637] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:23.284 [2024-11-26 20:46:06.997647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47800 len:8 PRP1 0x0 PRP2 0x0 00:18:23.284 [2024-11-26 20:46:06.997660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.284 [2024-11-26 20:46:06.997673] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:23.284 [2024-11-26 20:46:06.997683] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:23.284 [2024-11-26 20:46:06.997693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47808 len:8 PRP1 0x0 PRP2 0x0 00:18:23.284 [2024-11-26 20:46:06.997706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.284 [2024-11-26 20:46:06.997719] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:23.284 [2024-11-26 20:46:06.997728] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:23.284 [2024-11-26 20:46:06.997738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47816 len:8 PRP1 0x0 PRP2 0x0 00:18:23.284 [2024-11-26 20:46:06.997751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.284 [2024-11-26 20:46:06.997764] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:23.284 [2024-11-26 20:46:06.997773] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:23.284 [2024-11-26 20:46:06.997788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:47824 len:8 PRP1 0x0 PRP2 0x0 00:18:23.284 [2024-11-26 20:46:06.997802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.284 [2024-11-26 20:46:06.997815] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:23.284 [2024-11-26 20:46:06.997825] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:23.284 [2024-11-26 20:46:06.997834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47832 len:8 PRP1 0x0 PRP2 0x0 00:18:23.284 [2024-11-26 20:46:06.997847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.284 [2024-11-26 20:46:06.997903] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.3:4421 to 10.0.0.3:4422 00:18:23.284 [2024-11-26 20:46:06.997966] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:23.284 [2024-11-26 20:46:06.997995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.284 [2024-11-26 20:46:06.998010] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:23.284 [2024-11-26 20:46:06.998023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.284 [2024-11-26 20:46:06.998037] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:23.284 [2024-11-26 20:46:06.998050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.285 [2024-11-26 20:46:06.998064] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:23.285 [2024-11-26 20:46:06.998077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.285 [2024-11-26 20:46:06.998090] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:18:23.285 [2024-11-26 20:46:06.998124] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x605c60 (9): Bad file descriptor 00:18:23.285 [2024-11-26 20:46:07.001623] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:18:23.285 [2024-11-26 20:46:07.026983] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 
00:18:23.285 10854.80 IOPS, 42.40 MiB/s [2024-11-26T20:46:18.278Z] 10621.00 IOPS, 41.49 MiB/s [2024-11-26T20:46:18.278Z] 10463.71 IOPS, 40.87 MiB/s [2024-11-26T20:46:18.278Z] 10348.75 IOPS, 40.42 MiB/s [2024-11-26T20:46:18.278Z] 10256.44 IOPS, 40.06 MiB/s [2024-11-26T20:46:18.278Z] [2024-11-26 20:46:11.498504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:7904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.285 [2024-11-26 20:46:11.498572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.285 [2024-11-26 20:46:11.498596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:7912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.285 [2024-11-26 20:46:11.498610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.285 [2024-11-26 20:46:11.498626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:7920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.285 [2024-11-26 20:46:11.498641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.285 [2024-11-26 20:46:11.498655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.285 [2024-11-26 20:46:11.498669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.285 [2024-11-26 20:46:11.498683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:7936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.285 [2024-11-26 20:46:11.498698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.285 [2024-11-26 20:46:11.498712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:7944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.285 [2024-11-26 20:46:11.498726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.285 [2024-11-26 20:46:11.498742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:7392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.285 [2024-11-26 20:46:11.498781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.285 [2024-11-26 20:46:11.498797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:7400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.285 [2024-11-26 20:46:11.498812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.285 [2024-11-26 20:46:11.498828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:7408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.285 [2024-11-26 20:46:11.498842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.285 [2024-11-26 20:46:11.498858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:7416 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.285 [2024-11-26 20:46:11.498872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.285 [2024-11-26 20:46:11.498887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.285 [2024-11-26 20:46:11.498900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.285 [2024-11-26 20:46:11.498915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:7432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.285 [2024-11-26 20:46:11.498928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.285 [2024-11-26 20:46:11.498943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:7440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.285 [2024-11-26 20:46:11.498956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.285 [2024-11-26 20:46:11.498971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:7448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.285 [2024-11-26 20:46:11.498985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.285 [2024-11-26 20:46:11.499000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:7456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.285 [2024-11-26 20:46:11.499014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.285 [2024-11-26 20:46:11.499029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:7464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.285 [2024-11-26 20:46:11.499041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.285 [2024-11-26 20:46:11.499057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:7472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.285 [2024-11-26 20:46:11.499071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.285 [2024-11-26 20:46:11.499086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:7480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.285 [2024-11-26 20:46:11.499100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.285 [2024-11-26 20:46:11.499120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:7488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.285 [2024-11-26 20:46:11.499134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.285 [2024-11-26 20:46:11.499169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:7496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:23.285 [2024-11-26 20:46:11.499184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.285 [2024-11-26 20:46:11.499200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:7504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.285 [2024-11-26 20:46:11.499213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.285 [2024-11-26 20:46:11.499228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.285 [2024-11-26 20:46:11.499242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.285 [2024-11-26 20:46:11.499257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:7952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.285 [2024-11-26 20:46:11.499273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.285 [2024-11-26 20:46:11.499287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:7960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.285 [2024-11-26 20:46:11.499310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.285 [2024-11-26 20:46:11.499325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:7968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.285 [2024-11-26 20:46:11.499339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.285 [2024-11-26 20:46:11.499354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:7976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.285 [2024-11-26 20:46:11.499370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.285 [2024-11-26 20:46:11.499385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:7984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.285 [2024-11-26 20:46:11.499398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.285 [2024-11-26 20:46:11.499413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:7992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.285 [2024-11-26 20:46:11.499427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.285 [2024-11-26 20:46:11.499444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.285 [2024-11-26 20:46:11.499457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.285 [2024-11-26 20:46:11.499472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:8008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.285 [2024-11-26 20:46:11.499485] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.285 [2024-11-26 20:46:11.499500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:8016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.285 [2024-11-26 20:46:11.499515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.285 [2024-11-26 20:46:11.499529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:8024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.285 [2024-11-26 20:46:11.499543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.285 [2024-11-26 20:46:11.499565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:7520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.285 [2024-11-26 20:46:11.499578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.285 [2024-11-26 20:46:11.499593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:7528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.285 [2024-11-26 20:46:11.499607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.285 [2024-11-26 20:46:11.499622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:7536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.285 [2024-11-26 20:46:11.499637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.285 [2024-11-26 20:46:11.499652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:7544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.285 [2024-11-26 20:46:11.499666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.286 [2024-11-26 20:46:11.499681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:7552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.286 [2024-11-26 20:46:11.499694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.286 [2024-11-26 20:46:11.499709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:7560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.286 [2024-11-26 20:46:11.499722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.286 [2024-11-26 20:46:11.499737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:7568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.286 [2024-11-26 20:46:11.499752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.286 [2024-11-26 20:46:11.499767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:7576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.286 [2024-11-26 20:46:11.499780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.286 [2024-11-26 20:46:11.499796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:8032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.286 [2024-11-26 20:46:11.499812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.286 [2024-11-26 20:46:11.499827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:8040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.286 [2024-11-26 20:46:11.499844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.286 [2024-11-26 20:46:11.499859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.286 [2024-11-26 20:46:11.499872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.286 [2024-11-26 20:46:11.499887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:8056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.286 [2024-11-26 20:46:11.499901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.286 [2024-11-26 20:46:11.499916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:8064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.286 [2024-11-26 20:46:11.499936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.286 [2024-11-26 20:46:11.499952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:8072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.286 [2024-11-26 20:46:11.499965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.286 [2024-11-26 20:46:11.499980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:8080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.286 [2024-11-26 20:46:11.499994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.286 [2024-11-26 20:46:11.500008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:8088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.286 [2024-11-26 20:46:11.500022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.286 [2024-11-26 20:46:11.500037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:8096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.286 [2024-11-26 20:46:11.500050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.286 [2024-11-26 20:46:11.500065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:8104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.286 [2024-11-26 20:46:11.500079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:18:23.286 [2024-11-26 20:46:11.500095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:8112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.286 [2024-11-26 20:46:11.500108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.286 [2024-11-26 20:46:11.500124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:8120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.286 [2024-11-26 20:46:11.500137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.286 [2024-11-26 20:46:11.500153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:8128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.286 [2024-11-26 20:46:11.500177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.286 [2024-11-26 20:46:11.500193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:8136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.286 [2024-11-26 20:46:11.500206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.286 [2024-11-26 20:46:11.500221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:8144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.286 [2024-11-26 20:46:11.500235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.286 [2024-11-26 20:46:11.500252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:8152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.286 [2024-11-26 20:46:11.500266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.286 [2024-11-26 20:46:11.500280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:8160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.286 [2024-11-26 20:46:11.500293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.286 [2024-11-26 20:46:11.500316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:8168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.286 [2024-11-26 20:46:11.500330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.286 [2024-11-26 20:46:11.500346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:8176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.286 [2024-11-26 20:46:11.500360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.286 [2024-11-26 20:46:11.500375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:8184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.286 [2024-11-26 20:46:11.500389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.286 [2024-11-26 20:46:11.500404] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:8192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:23.286 [2024-11-26 20:46:11.500417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... repeated NOTICE pairs condensed: for every READ/WRITE still outstanding on qid:1 (lba 7584-8408, len:8, SGL DATA BLOCK / SGL TRANSPORT DATA BLOCK descriptors), nvme_io_qpair_print_command prints the queued command and spdk_nvme_print_completion reports it as ABORTED - SQ DELETION (00/08) while the submission queue is deleted for the path switch ...]
00:18:23.288 [2024-11-26
20:46:11.502235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:7848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.288 [2024-11-26 20:46:11.502248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.288 [2024-11-26 20:46:11.502263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:7856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.288 [2024-11-26 20:46:11.502276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.288 [2024-11-26 20:46:11.502290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:7864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.288 [2024-11-26 20:46:11.502303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.288 [2024-11-26 20:46:11.502318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:7872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.288 [2024-11-26 20:46:11.502331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.288 [2024-11-26 20:46:11.502345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:7880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.288 [2024-11-26 20:46:11.502358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.288 [2024-11-26 20:46:11.502373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:7888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.288 [2024-11-26 20:46:11.502386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.288 [2024-11-26 20:46:11.502400] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6759f0 is same with the state(6) to be set 00:18:23.288 [2024-11-26 20:46:11.502418] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:23.288 [2024-11-26 20:46:11.502428] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:23.288 [2024-11-26 20:46:11.502438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7896 len:8 PRP1 0x0 PRP2 0x0 00:18:23.288 [2024-11-26 20:46:11.502451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.288 [2024-11-26 20:46:11.502506] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.3:4422 to 10.0.0.3:4420 00:18:23.288 [2024-11-26 20:46:11.502558] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:23.288 [2024-11-26 20:46:11.502574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.288 [2024-11-26 20:46:11.502589] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:23.288 [2024-11-26 
20:46:11.502602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.288 [2024-11-26 20:46:11.502616] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:23.288 [2024-11-26 20:46:11.502629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.288 [2024-11-26 20:46:11.502643] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:23.288 [2024-11-26 20:46:11.502667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.288 [2024-11-26 20:46:11.502681] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:18:23.288 [2024-11-26 20:46:11.505513] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:18:23.288 [2024-11-26 20:46:11.505558] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x605c60 (9): Bad file descriptor 00:18:23.288 [2024-11-26 20:46:11.532708] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 00:18:23.288 10319.70 IOPS, 40.31 MiB/s [2024-11-26T20:46:18.281Z] 10428.09 IOPS, 40.73 MiB/s [2024-11-26T20:46:18.281Z] 10359.75 IOPS, 40.47 MiB/s [2024-11-26T20:46:18.281Z] 10291.46 IOPS, 40.20 MiB/s [2024-11-26T20:46:18.281Z] 10240.93 IOPS, 40.00 MiB/s [2024-11-26T20:46:18.281Z] 10190.73 IOPS, 39.81 MiB/s 00:18:23.288 Latency(us) 00:18:23.288 [2024-11-26T20:46:18.281Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:23.288 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:18:23.288 Verification LBA range: start 0x0 length 0x4000 00:18:23.288 NVMe0n1 : 15.01 10190.59 39.81 274.26 0.00 12205.27 475.92 16103.13 00:18:23.288 [2024-11-26T20:46:18.281Z] =================================================================================================================== 00:18:23.288 [2024-11-26T20:46:18.281Z] Total : 10190.59 39.81 274.26 0.00 12205.27 475.92 16103.13 00:18:23.288 Received shutdown signal, test time was about 15.000000 seconds 00:18:23.288 00:18:23.288 Latency(us) 00:18:23.288 [2024-11-26T20:46:18.281Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:23.288 [2024-11-26T20:46:18.281Z] =================================================================================================================== 00:18:23.288 [2024-11-26T20:46:18.281Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:23.288 20:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:18:23.288 20:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:18:23.288 20:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:18:23.288 20:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=76031 00:18:23.288 20:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 76031 /var/tmp/bdevperf.sock 00:18:23.288 20:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 
128 -o 4096 -w verify -t 1 -f 00:18:23.288 20:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 76031 ']' 00:18:23.288 20:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:23.288 20:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:23.288 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:23.288 20:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:23.288 20:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:23.288 20:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:18:23.288 20:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:23.288 20:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:18:23.288 20:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:18:23.288 [2024-11-26 20:46:18.024478] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:18:23.288 20:46:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:18:23.547 [2024-11-26 20:46:18.352849] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4422 *** 00:18:23.547 20:46:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:18:23.807 NVMe0n1 00:18:23.807 20:46:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:18:24.065 00:18:24.065 20:46:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:18:24.328 00:18:24.328 20:46:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:24.328 20:46:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:18:24.587 20:46:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:24.844 20:46:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:18:28.127 20:46:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:28.127 20:46:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:18:28.127 20:46:23 
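The trace above wires up the second half of the failover test: bdevperf is restarted in RPC-wait mode on /var/tmp/bdevperf.sock, two extra listeners are added to cnode1, and NVMe0 is attached once per path with -x failover before the active path is detached. A condensed sketch of that RPC sequence follows; the paths, ports and subsystem NQN are the ones printed in the trace, everything else is illustrative rather than the literal failover.sh.

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/bdevperf.sock
nqn=nqn.2016-06.io.spdk:cnode1

# target side: expose the two additional TCP ports used as failover paths
$rpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.3 -s 4421
$rpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.3 -s 4422

# initiator side (inside bdevperf): one bdev, three paths, failover policy
for port in 4420 4421 4422; do
    $rpc -s $sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 \
        -s $port -f ipv4 -n $nqn -x failover
done

# pull the active path out and confirm the bdev survives on a secondary path
$rpc -s $sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n $nqn
sleep 3
$rpc -s $sock bdev_nvme_get_controllers | grep -q NVMe0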
nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=76102 00:18:28.127 20:46:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:28.127 20:46:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 76102 00:18:29.500 { 00:18:29.500 "results": [ 00:18:29.500 { 00:18:29.500 "job": "NVMe0n1", 00:18:29.500 "core_mask": "0x1", 00:18:29.500 "workload": "verify", 00:18:29.500 "status": "finished", 00:18:29.500 "verify_range": { 00:18:29.500 "start": 0, 00:18:29.500 "length": 16384 00:18:29.500 }, 00:18:29.500 "queue_depth": 128, 00:18:29.500 "io_size": 4096, 00:18:29.500 "runtime": 1.00587, 00:18:29.500 "iops": 7168.918448706096, 00:18:29.500 "mibps": 28.003587690258186, 00:18:29.500 "io_failed": 0, 00:18:29.500 "io_timeout": 0, 00:18:29.500 "avg_latency_us": 17773.453672233558, 00:18:29.500 "min_latency_us": 1107.8704761904762, 00:18:29.500 "max_latency_us": 16477.62285714286 00:18:29.500 } 00:18:29.500 ], 00:18:29.500 "core_count": 1 00:18:29.500 } 00:18:29.500 20:46:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:18:29.500 [2024-11-26 20:46:17.471977] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:18:29.500 [2024-11-26 20:46:17.472102] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76031 ] 00:18:29.500 [2024-11-26 20:46:17.625925] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:29.500 [2024-11-26 20:46:17.684371] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:29.500 [2024-11-26 20:46:17.741665] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:29.500 [2024-11-26 20:46:19.703598] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.3:4420 to 10.0.0.3:4421 00:18:29.500 [2024-11-26 20:46:19.703783] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:29.500 [2024-11-26 20:46:19.703804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.500 [2024-11-26 20:46:19.703821] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:29.500 [2024-11-26 20:46:19.703835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.500 [2024-11-26 20:46:19.703850] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:29.500 [2024-11-26 20:46:19.703864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.500 [2024-11-26 20:46:19.703879] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:29.500 [2024-11-26 20:46:19.703892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.500 [2024-11-26 20:46:19.703906] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 00:18:29.500 [2024-11-26 20:46:19.703949] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:18:29.500 [2024-11-26 20:46:19.703974] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1235c60 (9): Bad file descriptor 00:18:29.500 [2024-11-26 20:46:19.706404] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:18:29.500 Running I/O for 1 seconds... 00:18:29.500 7068.00 IOPS, 27.61 MiB/s 00:18:29.500 Latency(us) 00:18:29.500 [2024-11-26T20:46:24.493Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:29.500 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:18:29.500 Verification LBA range: start 0x0 length 0x4000 00:18:29.501 NVMe0n1 : 1.01 7168.92 28.00 0.00 0.00 17773.45 1107.87 16477.62 00:18:29.501 [2024-11-26T20:46:24.494Z] =================================================================================================================== 00:18:29.501 [2024-11-26T20:46:24.494Z] Total : 7168.92 28.00 0.00 0.00 17773.45 1107.87 16477.62 00:18:29.501 20:46:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:29.501 20:46:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:18:29.759 20:46:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:30.016 20:46:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:18:30.016 20:46:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:30.275 20:46:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:30.533 20:46:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:18:33.881 20:46:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:33.881 20:46:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:18:33.881 20:46:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 76031 00:18:33.881 20:46:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 76031 ']' 00:18:33.881 20:46:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 76031 00:18:33.881 20:46:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:18:33.881 20:46:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:33.881 20:46:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76031 00:18:33.881 killing process with pid 76031 00:18:33.881 20:46:28 nvmf_tcp.nvmf_host.nvmf_failover -- 
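The JSON blob printed after perform_tests above is the structured version of the run summary (iops, mibps, avg/min/max latency per job). If that output is captured to a file, the headline numbers can be pulled out with a one-liner; jq and the results.json filename here are assumptions for illustration, only the field names come from the JSON shown above.

# assumes bdevperf.py writes the JSON summary to stdout, as in the trace
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests > results.json
jq -r '.results[] | [.job, .iops, .mibps, .avg_latency_us] | @tsv' results.json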
common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:33.881 20:46:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:33.881 20:46:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76031' 00:18:33.881 20:46:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 76031 00:18:33.881 20:46:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 76031 00:18:34.139 20:46:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:18:34.139 20:46:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:34.399 20:46:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:18:34.399 20:46:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:18:34.399 20:46:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:18:34.399 20:46:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:34.399 20:46:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:18:34.399 20:46:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:34.399 20:46:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:18:34.399 20:46:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:34.399 20:46:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:34.399 rmmod nvme_tcp 00:18:34.399 rmmod nvme_fabrics 00:18:34.399 rmmod nvme_keyring 00:18:34.399 20:46:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:34.399 20:46:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:18:34.399 20:46:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:18:34.399 20:46:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 75775 ']' 00:18:34.399 20:46:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 75775 00:18:34.399 20:46:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 75775 ']' 00:18:34.399 20:46:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 75775 00:18:34.656 20:46:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:18:34.656 20:46:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:34.656 20:46:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75775 00:18:34.656 killing process with pid 75775 00:18:34.656 20:46:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:34.656 20:46:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:34.656 20:46:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75775' 00:18:34.656 20:46:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 75775 00:18:34.656 20:46:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 75775 00:18:34.914 20:46:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == 
iso ']' 00:18:34.914 20:46:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:34.914 20:46:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:34.914 20:46:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:18:34.914 20:46:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:18:34.914 20:46:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:18:34.914 20:46:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:34.914 20:46:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:34.914 20:46:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:18:34.914 20:46:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:18:34.914 20:46:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:18:34.914 20:46:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:18:34.914 20:46:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:18:34.914 20:46:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:18:34.914 20:46:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:18:34.914 20:46:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:18:34.914 20:46:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:18:34.914 20:46:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:18:34.914 20:46:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:18:34.914 20:46:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:18:35.174 20:46:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:35.174 20:46:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:35.174 20:46:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@246 -- # remove_spdk_ns 00:18:35.174 20:46:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:35.174 20:46:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:35.174 20:46:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:35.174 20:46:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@300 -- # return 0 00:18:35.174 00:18:35.174 real 0m32.969s 00:18:35.174 user 2m4.796s 00:18:35.174 sys 0m7.297s 00:18:35.174 20:46:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:35.174 20:46:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:18:35.174 ************************************ 00:18:35.174 END TEST nvmf_failover 00:18:35.174 ************************************ 00:18:35.174 20:46:30 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:18:35.174 20:46:30 nvmf_tcp.nvmf_host 
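Teardown in the trace above relies on tagging: every iptables rule the tests install carries an 'SPDK_NVMF:' comment (visible in the -m comment arguments later in this log), so the iptr cleanup step can drop exactly those rules by filtering the saved ruleset. A sketch of that idea, assuming the same tagging convention and a simple save/filter/restore pipeline:

# remove only the rules that were added with the SPDK_NVMF comment tag
iptables-save | grep -v SPDK_NVMF | iptables-restore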
-- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:35.174 20:46:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:35.174 20:46:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:18:35.174 ************************************ 00:18:35.174 START TEST nvmf_host_discovery 00:18:35.174 ************************************ 00:18:35.174 20:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:18:35.433 * Looking for test storage... 00:18:35.433 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:35.433 20:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:35.433 20:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:18:35.433 20:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:35.433 20:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:35.433 20:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:35.433 20:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:35.433 20:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:35.433 20:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:18:35.433 20:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:18:35.433 20:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:18:35.433 20:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:18:35.433 20:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:18:35.433 20:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:18:35.433 20:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:18:35.433 20:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:35.433 20:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:18:35.433 20:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:18:35.433 20:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:35.433 20:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:35.433 20:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:18:35.433 20:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:18:35.433 20:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:35.433 20:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:18:35.433 20:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:18:35.433 20:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:18:35.433 20:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:18:35.433 20:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:35.433 20:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:18:35.433 20:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:18:35.433 20:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:35.433 20:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:35.433 20:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:18:35.433 20:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:35.433 20:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:35.433 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:35.433 --rc genhtml_branch_coverage=1 00:18:35.433 --rc genhtml_function_coverage=1 00:18:35.433 --rc genhtml_legend=1 00:18:35.433 --rc geninfo_all_blocks=1 00:18:35.433 --rc geninfo_unexecuted_blocks=1 00:18:35.433 00:18:35.433 ' 00:18:35.433 20:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:35.433 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:35.433 --rc genhtml_branch_coverage=1 00:18:35.433 --rc genhtml_function_coverage=1 00:18:35.433 --rc genhtml_legend=1 00:18:35.433 --rc geninfo_all_blocks=1 00:18:35.433 --rc geninfo_unexecuted_blocks=1 00:18:35.433 00:18:35.433 ' 00:18:35.433 20:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:35.433 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:35.433 --rc genhtml_branch_coverage=1 00:18:35.433 --rc genhtml_function_coverage=1 00:18:35.433 --rc genhtml_legend=1 00:18:35.433 --rc geninfo_all_blocks=1 00:18:35.433 --rc geninfo_unexecuted_blocks=1 00:18:35.433 00:18:35.433 ' 00:18:35.433 20:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:35.433 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:35.433 --rc genhtml_branch_coverage=1 00:18:35.433 --rc genhtml_function_coverage=1 00:18:35.434 --rc genhtml_legend=1 00:18:35.434 --rc geninfo_all_blocks=1 00:18:35.434 --rc geninfo_unexecuted_blocks=1 00:18:35.434 00:18:35.434 ' 00:18:35.434 20:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:35.434 20:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:18:35.434 20:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:35.434 20:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:35.434 20:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:35.434 20:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:35.434 20:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:35.434 20:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:35.434 20:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:35.434 20:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:35.434 20:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:35.434 20:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:35.434 20:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b 00:18:35.434 20:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b7a0101-ee75-44bd-b64f-b6a56d193f2b 00:18:35.434 20:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:35.434 20:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:35.434 20:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:35.434 20:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:35.434 20:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:35.434 20:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:18:35.434 20:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:35.434 20:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:35.434 20:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:35.434 20:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:35.434 20:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:35.434 20:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:35.434 20:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:18:35.434 20:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:35.434 20:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:18:35.434 20:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:35.434 20:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:35.434 20:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:35.434 20:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:35.434 20:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:35.434 20:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:35.434 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:35.434 20:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:35.434 20:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:35.434 20:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:35.434 20:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:18:35.434 20:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:18:35.434 20:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- 
# DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:18:35.434 20:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:18:35.434 20:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:18:35.434 20:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:18:35.434 20:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:18:35.434 20:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:35.434 20:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:35.434 20:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:35.434 20:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:35.434 20:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:35.434 20:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:35.434 20:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:35.434 20:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:35.434 20:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:18:35.434 20:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:18:35.434 20:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:18:35.434 20:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:18:35.434 20:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:18:35.434 20:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@460 -- # nvmf_veth_init 00:18:35.434 20:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:35.434 20:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:18:35.434 20:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:18:35.434 20:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:18:35.434 20:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:35.434 20:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:18:35.434 20:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:35.434 20:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:18:35.434 20:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:35.434 20:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:18:35.434 20:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:35.434 20:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
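The address plan and the NVMF_TARGET_NS_CMD array set above are what separate initiator side from target side for the rest of this trace: the initiator interfaces (10.0.0.1/10.0.0.2) stay in the root namespace, while the target interfaces (10.0.0.3/10.0.0.4) live in nvmf_tgt_ns_spdk and every target-side command is prefixed with the array. A small sketch of the pattern, using values taken from the trace:

NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")

# target-side configuration and checks run inside the namespace
"${NVMF_TARGET_NS_CMD[@]}" ip addr add 10.0.0.3/24 dev nvmf_tgt_if
"${NVMF_TARGET_NS_CMD[@]}" ping -c 1 10.0.0.1   # target -> initiator reachability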
00:18:35.434 20:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:35.434 20:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:35.434 20:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:35.434 20:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:35.434 20:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:18:35.434 Cannot find device "nvmf_init_br" 00:18:35.434 20:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # true 00:18:35.434 20:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:18:35.434 Cannot find device "nvmf_init_br2" 00:18:35.434 20:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # true 00:18:35.434 20:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:18:35.434 Cannot find device "nvmf_tgt_br" 00:18:35.434 20:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@164 -- # true 00:18:35.434 20:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:18:35.434 Cannot find device "nvmf_tgt_br2" 00:18:35.434 20:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@165 -- # true 00:18:35.434 20:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:18:35.434 Cannot find device "nvmf_init_br" 00:18:35.434 20:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # true 00:18:35.434 20:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:18:35.693 Cannot find device "nvmf_init_br2" 00:18:35.693 20:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@167 -- # true 00:18:35.693 20:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:18:35.693 Cannot find device "nvmf_tgt_br" 00:18:35.693 20:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@168 -- # true 00:18:35.693 20:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:18:35.693 Cannot find device "nvmf_tgt_br2" 00:18:35.693 20:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # true 00:18:35.693 20:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:18:35.693 Cannot find device "nvmf_br" 00:18:35.693 20:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # true 00:18:35.693 20:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:18:35.693 Cannot find device "nvmf_init_if" 00:18:35.693 20:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # true 00:18:35.693 20:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:18:35.693 Cannot find device "nvmf_init_if2" 00:18:35.693 20:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@172 -- # true 00:18:35.693 20:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:35.693 Cannot open network namespace "nvmf_tgt_ns_spdk": No such 
file or directory 00:18:35.693 20:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@173 -- # true 00:18:35.693 20:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:35.693 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:35.693 20:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # true 00:18:35.693 20:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:18:35.693 20:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:35.693 20:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:18:35.693 20:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:35.693 20:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:35.693 20:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:35.693 20:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:35.693 20:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:35.693 20:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:18:35.693 20:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:18:35.693 20:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:18:35.693 20:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:18:35.693 20:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:18:35.693 20:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:18:35.693 20:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:18:35.693 20:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:18:35.693 20:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:18:35.693 20:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:35.693 20:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:35.693 20:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:35.693 20:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:18:35.953 20:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:18:35.953 20:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:18:35.953 20:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:18:35.953 20:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:35.953 20:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:35.953 20:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:35.953 20:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:18:35.953 20:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:18:35.953 20:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:18:35.953 20:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:35.953 20:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:18:35.953 20:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:18:35.953 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:35.953 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.116 ms 00:18:35.953 00:18:35.953 --- 10.0.0.3 ping statistics --- 00:18:35.953 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:35.953 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:18:35.953 20:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:18:35.953 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:18:35.953 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.050 ms 00:18:35.953 00:18:35.953 --- 10.0.0.4 ping statistics --- 00:18:35.953 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:35.953 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:18:35.953 20:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:35.953 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:35.953 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.073 ms 00:18:35.953 00:18:35.953 --- 10.0.0.1 ping statistics --- 00:18:35.953 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:35.953 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:18:35.953 20:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:18:35.953 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:35.953 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.051 ms 00:18:35.953 00:18:35.953 --- 10.0.0.2 ping statistics --- 00:18:35.953 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:35.953 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:18:35.953 20:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:35.953 20:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@461 -- # return 0 00:18:35.953 20:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:35.953 20:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:35.953 20:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:35.953 20:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:35.953 20:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:35.953 20:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:35.953 20:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:35.953 20:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:18:35.953 20:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:35.953 20:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:35.953 20:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:35.953 20:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=76430 00:18:35.953 20:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:35.953 20:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 76430 00:18:35.953 20:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 76430 ']' 00:18:35.953 20:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:35.953 20:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:35.953 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:35.953 20:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:35.953 20:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:35.953 20:46:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:35.953 [2024-11-26 20:46:30.896426] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
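At this point nvmf/common.sh has finished wiring up the test network: a network namespace for the target, two veth pairs per side, a bridge joining the *_br peers, ACCEPT rules for port 4420, and a four-way ping check. Condensed to the commands visible in the trace (the real helpers also tag the iptables rules with SPDK_NVMF comments through their ipts wrapper and tolerate pre-existing devices; that error handling is omitted here), the topology is roughly:

    # Namespace that will run the target application.
    ip netns add nvmf_tgt_ns_spdk

    # veth pairs: the *_if ends carry addresses, the *_br ends become bridge ports.
    # The target-side *_if ends are moved into the namespace.
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

    # 10.0.0.1/.2 stay with the initiator in the root namespace,
    # 10.0.0.3/.4 belong to the target inside the namespace.
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

    # Bring everything up and enslave the *_br peers to a common bridge.
    ip link set nvmf_init_if up
    ip link set nvmf_init_if2 up
    ip link set nvmf_init_br up
    ip link set nvmf_init_br2 up
    ip link set nvmf_tgt_br up
    ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br  master nvmf_br
    ip link set nvmf_init_br2 master nvmf_br
    ip link set nvmf_tgt_br   master nvmf_br
    ip link set nvmf_tgt_br2  master nvmf_br

    # Let NVMe/TCP traffic in on port 4420 and allow bridge-local forwarding.
    iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
    iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

    # Connectivity check in both directions before starting nvmf_tgt.
    ping -c 1 10.0.0.3
    ping -c 1 10.0.0.4
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2

The practical upshot is that the initiator lives at 10.0.0.1/10.0.0.2 and the target at 10.0.0.3/10.0.0.4 inside nvmf_tgt_ns_spdk, which is why every listener created later in this test binds to 10.0.0.3.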
00:18:35.953 [2024-11-26 20:46:30.896707] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:36.212 [2024-11-26 20:46:31.047840] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:36.212 [2024-11-26 20:46:31.115915] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:36.212 [2024-11-26 20:46:31.115974] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:36.212 [2024-11-26 20:46:31.115986] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:36.212 [2024-11-26 20:46:31.115997] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:36.212 [2024-11-26 20:46:31.116005] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:36.212 [2024-11-26 20:46:31.116404] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:36.212 [2024-11-26 20:46:31.199822] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:36.470 20:46:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:36.470 20:46:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:18:36.470 20:46:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:36.470 20:46:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:36.470 20:46:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:36.470 20:46:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:36.470 20:46:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:36.470 20:46:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.470 20:46:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:36.470 [2024-11-26 20:46:31.353755] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:36.470 20:46:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.470 20:46:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 00:18:36.470 20:46:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.470 20:46:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:36.470 [2024-11-26 20:46:31.361973] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:18:36.470 20:46:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.470 20:46:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:18:36.470 20:46:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.470 20:46:31 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:36.470 null0 00:18:36.470 20:46:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.470 20:46:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:18:36.470 20:46:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.470 20:46:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:36.470 null1 00:18:36.471 20:46:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.471 20:46:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:18:36.471 20:46:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.471 20:46:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:36.471 20:46:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.471 20:46:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=76459 00:18:36.471 20:46:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:18:36.471 20:46:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 76459 /tmp/host.sock 00:18:36.471 20:46:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 76459 ']' 00:18:36.471 20:46:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:18:36.471 20:46:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:36.471 20:46:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:18:36.471 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:18:36.471 20:46:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:36.471 20:46:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:36.471 [2024-11-26 20:46:31.451468] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
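With the network in place, the trace brings up two SPDK applications: nvmf_tgt inside the namespace acting as the target (pid 76430, default /var/tmp/spdk.sock RPC socket) and a second nvmf_tgt in the root namespace acting as the host/initiator on /tmp/host.sock (pid 76459). The target-side configuration issued so far, condensed to the rpc_cmd calls from the trace (rpc_cmd is the suite's RPC wrapper; binary paths and arguments are copied verbatim, and the waitforlisten polling between steps is omitted):

    # Target application, pinned to the namespace that owns 10.0.0.3/10.0.0.4.
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &

    rpc_cmd nvmf_create_transport -t tcp -o -u 8192      # TCP transport; -o/-u per NVMF_TRANSPORT_OPTS above
    rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery \
            -t tcp -a 10.0.0.3 -s 8009                   # discovery service the host will attach to
    rpc_cmd bdev_null_create null0 1000 512              # backing bdevs for the namespaces added later
    rpc_cmd bdev_null_create null1 1000 512
    rpc_cmd bdev_wait_for_examine

    # Host application: same binary, used here purely as an NVMe-oF initiator,
    # on its own RPC socket so it does not collide with the target's.
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock &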
00:18:36.471 [2024-11-26 20:46:31.451782] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76459 ] 00:18:36.729 [2024-11-26 20:46:31.612805] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:36.729 [2024-11-26 20:46:31.678439] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:36.987 [2024-11-26 20:46:31.741580] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:36.987 20:46:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:36.988 20:46:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:18:36.988 20:46:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:36.988 20:46:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:18:36.988 20:46:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.988 20:46:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:36.988 20:46:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.988 20:46:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:18:36.988 20:46:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.988 20:46:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:36.988 20:46:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.988 20:46:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:18:36.988 20:46:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:18:36.988 20:46:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:36.988 20:46:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:18:36.988 20:46:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:36.988 20:46:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:18:36.988 20:46:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.988 20:46:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:36.988 20:46:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.988 20:46:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:18:36.988 20:46:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:18:36.988 20:46:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:36.988 20:46:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.988 20:46:31 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:36.988 20:46:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:36.988 20:46:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:36.988 20:46:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:36.988 20:46:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.988 20:46:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:18:36.988 20:46:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:18:36.988 20:46:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.988 20:46:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:36.988 20:46:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.988 20:46:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:18:36.988 20:46:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:36.988 20:46:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:36.988 20:46:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:18:36.988 20:46:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.988 20:46:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:36.988 20:46:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:18:36.988 20:46:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.246 20:46:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:18:37.246 20:46:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:18:37.246 20:46:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:37.246 20:46:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:37.246 20:46:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.246 20:46:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:37.246 20:46:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:37.246 20:46:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:37.246 20:46:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.246 20:46:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:18:37.246 20:46:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:18:37.246 20:46:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.246 20:46:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:37.246 20:46:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.246 20:46:32 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:18:37.246 20:46:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:37.246 20:46:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.246 20:46:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:37.246 20:46:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:37.246 20:46:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:18:37.246 20:46:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:18:37.246 20:46:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.246 20:46:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:18:37.246 20:46:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:18:37.246 20:46:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:37.246 20:46:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.246 20:46:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:37.246 20:46:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:37.246 20:46:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:37.246 20:46:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:37.246 20:46:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.246 20:46:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:18:37.246 20:46:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:18:37.246 20:46:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.246 20:46:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:37.246 [2024-11-26 20:46:32.166160] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:37.246 20:46:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.246 20:46:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:18:37.246 20:46:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:37.246 20:46:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.246 20:46:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:37.246 20:46:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:37.246 20:46:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:18:37.246 20:46:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:18:37.246 20:46:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.246 20:46:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ 
'' == '' ]] 00:18:37.246 20:46:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:18:37.246 20:46:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:37.246 20:46:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:37.246 20:46:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:37.246 20:46:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:37.246 20:46:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.246 20:46:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:37.504 20:46:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.504 20:46:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:18:37.504 20:46:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:18:37.504 20:46:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:18:37.504 20:46:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:18:37.504 20:46:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:18:37.504 20:46:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:18:37.504 20:46:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:18:37.504 20:46:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:18:37.504 20:46:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:18:37.504 20:46:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:18:37.504 20:46:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:18:37.504 20:46:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.504 20:46:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:37.504 20:46:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.504 20:46:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:18:37.504 20:46:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:18:37.504 20:46:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:18:37.504 20:46:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:18:37.504 20:46:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:18:37.504 20:46:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.504 20:46:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:37.504 20:46:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.504 20:46:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:18:37.504 20:46:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:18:37.504 20:46:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:18:37.504 20:46:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:18:37.504 20:46:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:18:37.504 20:46:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:18:37.504 20:46:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:37.504 20:46:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.504 20:46:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:37.504 20:46:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:37.504 20:46:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:18:37.504 20:46:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:18:37.504 20:46:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.504 20:46:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 00:18:37.504 20:46:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:18:38.069 [2024-11-26 20:46:32.836914] bdev_nvme.c:7484:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:18:38.069 [2024-11-26 20:46:32.836953] bdev_nvme.c:7570:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:18:38.069 [2024-11-26 20:46:32.836974] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:18:38.069 
[2024-11-26 20:46:32.842974] bdev_nvme.c:7413:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme0 00:18:38.069 [2024-11-26 20:46:32.897399] bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.3:4420 00:18:38.069 [2024-11-26 20:46:32.898559] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x567e60:1 started. 00:18:38.069 [2024-11-26 20:46:32.900650] bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:18:38.069 [2024-11-26 20:46:32.900683] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:18:38.069 [2024-11-26 20:46:32.905621] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x567e60 was disconnected and freed. delete nvme_qpair. 00:18:38.635 20:46:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:18:38.635 20:46:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:18:38.635 20:46:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:18:38.635 20:46:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:38.635 20:46:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:38.635 20:46:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.635 20:46:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:18:38.635 20:46:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:38.635 20:46:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:18:38.635 20:46:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.635 20:46:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:38.635 20:46:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:18:38.635 20:46:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:18:38.635 20:46:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:18:38.635 20:46:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:18:38.635 20:46:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:18:38.635 20:46:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:18:38.635 20:46:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:18:38.635 20:46:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:38.635 20:46:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:38.635 20:46:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.635 20:46:33 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:38.635 20:46:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:38.635 20:46:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:38.635 20:46:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.635 20:46:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:18:38.635 20:46:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:18:38.635 20:46:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:18:38.635 20:46:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:18:38.635 20:46:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:18:38.635 20:46:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:18:38.635 20:46:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:18:38.636 20:46:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:18:38.636 20:46:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:18:38.636 20:46:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.636 20:46:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:38.636 20:46:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:18:38.636 20:46:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:18:38.636 20:46:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:18:38.636 20:46:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.636 20:46:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 00:18:38.636 20:46:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:18:38.636 20:46:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:18:38.636 20:46:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:18:38.636 20:46:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:18:38.636 20:46:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:18:38.636 20:46:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:18:38.636 20:46:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:18:38.636 20:46:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:18:38.636 20:46:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@921 -- # get_notification_count 00:18:38.636 20:46:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:18:38.636 20:46:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:18:38.636 20:46:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.636 20:46:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:38.636 20:46:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.636 20:46:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:18:38.636 20:46:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:18:38.636 20:46:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:18:38.636 20:46:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:18:38.636 20:46:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:18:38.636 20:46:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.636 20:46:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:38.636 20:46:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.636 20:46:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:18:38.636 20:46:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:18:38.636 20:46:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:18:38.636 20:46:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:18:38.636 20:46:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:18:38.636 [2024-11-26 20:46:33.589016] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x5762f0:1 started. 00:18:38.636 20:46:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:18:38.636 20:46:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:38.636 20:46:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:38.636 20:46:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:38.636 20:46:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.636 20:46:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:38.636 20:46:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:38.636 [2024-11-26 20:46:33.595742] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x5762f0 was disconnected and freed. delete nvme_qpair. 
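Most of the eval/jq/sort/xargs chatter in this part of the trace comes from a handful of small polling helpers. Reconstructed from what the trace shows (the real definitions live in host/discovery.sh and common/autotest_common.sh, so treat this as an approximation of their structure, not their exact text):

    # Poll a shell condition, up to 10 attempts with a one-second sleep between them.
    # (The failure path is not exercised in this trace.)
    waitforcondition() {
        local cond=$1 max=10
        while (( max-- )); do
            eval "$cond" && return 0
            sleep 1
        done
        return 1
    }

    # Controller / bdev / path views, all queried on the host app's RPC socket.
    get_subsystem_names() {
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
    }
    get_bdev_list() {
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }
    get_subsystem_paths() {
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" \
            | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
    }
    get_notification_count() {
        notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications -i "$notify_id" | jq '. | length')
        notify_id=$((notify_id + notification_count))
    }

    # Typical use, matching the checks around this point in the trace:
    waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]'
    waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'

Each state change (controller attached, bdev created, namespace added) is asserted by polling one of these views until it matches the expected string, and get_notification_count keeps a running notify_id so every is_notification_count_eq check only counts notifications that arrived since the previous one.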
00:18:38.894 20:46:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.894 20:46:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:18:38.894 20:46:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:18:38.894 20:46:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:18:38.894 20:46:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:18:38.894 20:46:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:18:38.894 20:46:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:18:38.894 20:46:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:18:38.894 20:46:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:18:38.894 20:46:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:18:38.894 20:46:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:18:38.894 20:46:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:18:38.894 20:46:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:18:38.894 20:46:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.894 20:46:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:38.894 20:46:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.894 20:46:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:18:38.894 20:46:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:18:38.894 20:46:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:18:38.894 20:46:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:18:38.894 20:46:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4421 00:18:38.894 20:46:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.894 20:46:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:38.894 [2024-11-26 20:46:33.699677] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:18:38.894 [2024-11-26 20:46:33.700735] bdev_nvme.c:7466:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:18:38.894 [2024-11-26 20:46:33.700764] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:18:38.894 20:46:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.894 20:46:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ 
"$(get_subsystem_names)" == "nvme0" ]]' 00:18:38.895 20:46:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:18:38.895 20:46:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:18:38.895 20:46:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:18:38.895 20:46:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:18:38.895 [2024-11-26 20:46:33.706728] bdev_nvme.c:7408:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new path for nvme0 00:18:38.895 20:46:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:18:38.895 20:46:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:38.895 20:46:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.895 20:46:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:38.895 20:46:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:38.895 20:46:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:18:38.895 20:46:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:18:38.895 20:46:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.895 20:46:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:38.895 20:46:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:18:38.895 20:46:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:18:38.895 20:46:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:18:38.895 20:46:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:18:38.895 20:46:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:18:38.895 20:46:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:18:38.895 20:46:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:18:38.895 20:46:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:38.895 20:46:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:38.895 20:46:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:38.895 20:46:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.895 20:46:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:38.895 20:46:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:38.895 [2024-11-26 20:46:33.770110] bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.3:4421 00:18:38.895 [2024-11-26 20:46:33.770173] 
bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:18:38.895 [2024-11-26 20:46:33.770183] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:18:38.895 [2024-11-26 20:46:33.770190] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:18:38.895 20:46:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.895 20:46:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:18:38.895 20:46:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:18:38.895 20:46:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:18:38.895 20:46:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:18:38.895 20:46:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:18:38.895 20:46:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:18:38.895 20:46:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:18:38.895 20:46:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:18:38.895 20:46:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:18:38.895 20:46:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.895 20:46:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:38.895 20:46:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:18:38.895 20:46:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:18:38.895 20:46:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:18:38.895 20:46:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.895 20:46:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:18:38.895 20:46:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:18:38.895 20:46:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:18:38.895 20:46:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:18:38.895 20:46:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:18:38.895 20:46:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:18:38.895 20:46:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:18:38.895 20:46:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@920 -- # (( max-- )) 00:18:38.895 20:46:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:18:38.895 20:46:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:18:38.895 20:46:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:18:38.895 20:46:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.895 20:46:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:38.895 20:46:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:18:38.895 20:46:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.153 20:46:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:18:39.153 20:46:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:18:39.153 20:46:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:18:39.153 20:46:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:18:39.153 20:46:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:18:39.153 20:46:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.153 20:46:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:39.153 [2024-11-26 20:46:33.916907] bdev_nvme.c:7466:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:18:39.153 [2024-11-26 20:46:33.916938] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:18:39.153 [2024-11-26 20:46:33.917529] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:39.153 [2024-11-26 20:46:33.917556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:39.153 [2024-11-26 20:46:33.917568] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:39.153 [2024-11-26 20:46:33.917577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:39.153 [2024-11-26 20:46:33.917587] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:39.153 [2024-11-26 20:46:33.917596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:39.153 [2024-11-26 20:46:33.917606] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:39.153 [2024-11-26 20:46:33.917614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:39.153 [2024-11-26 20:46:33.917623] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x544240 is same with the state(6) to be set 00:18:39.153 20:46:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.153 20:46:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:18:39.153 20:46:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:18:39.153 20:46:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:18:39.153 20:46:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:18:39.153 20:46:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:18:39.153 [2024-11-26 20:46:33.922887] bdev_nvme.c:7271:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 not found 00:18:39.153 [2024-11-26 20:46:33.922913] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:18:39.153 [2024-11-26 20:46:33.922963] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x544240 (9): Bad file descriptor 00:18:39.153 20:46:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:18:39.153 20:46:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:39.153 20:46:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:18:39.153 20:46:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:39.153 20:46:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.153 20:46:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:39.153 20:46:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:18:39.153 20:46:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.153 20:46:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:39.153 20:46:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:18:39.153 20:46:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:18:39.153 20:46:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:18:39.153 20:46:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:18:39.153 20:46:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:18:39.154 20:46:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:18:39.154 20:46:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:18:39.154 20:46:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:39.154 20:46:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:39.154 
20:46:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.154 20:46:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:39.154 20:46:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:39.154 20:46:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:39.154 20:46:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.154 20:46:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:18:39.154 20:46:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:18:39.154 20:46:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:18:39.154 20:46:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:18:39.154 20:46:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:18:39.154 20:46:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:18:39.154 20:46:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:18:39.154 20:46:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:18:39.154 20:46:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:18:39.154 20:46:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:18:39.154 20:46:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.154 20:46:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:39.154 20:46:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:18:39.154 20:46:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:18:39.154 20:46:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.154 20:46:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]] 00:18:39.154 20:46:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:18:39.154 20:46:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:18:39.154 20:46:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:18:39.154 20:46:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:18:39.154 20:46:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:18:39.154 20:46:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:18:39.154 20:46:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:18:39.154 20:46:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:18:39.154 20:46:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:18:39.154 20:46:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:18:39.154 20:46:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.154 20:46:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:39.154 20:46:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:18:39.154 20:46:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.154 20:46:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:18:39.154 20:46:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:18:39.154 20:46:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:18:39.154 20:46:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:18:39.154 20:46:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:18:39.154 20:46:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.154 20:46:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:39.154 20:46:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.154 20:46:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:18:39.154 20:46:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:18:39.154 20:46:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:18:39.154 20:46:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:18:39.154 20:46:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:18:39.154 20:46:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:18:39.154 20:46:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:39.154 20:46:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.154 20:46:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:39.154 20:46:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:39.154 20:46:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:18:39.154 20:46:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:18:39.412 20:46:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.412 20:46:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:18:39.412 20:46:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:18:39.412 
20:46:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:18:39.412 20:46:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:18:39.412 20:46:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:18:39.412 20:46:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:18:39.412 20:46:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:18:39.412 20:46:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:18:39.412 20:46:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:39.412 20:46:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:39.412 20:46:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:39.412 20:46:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.412 20:46:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:39.412 20:46:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:39.412 20:46:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.412 20:46:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:18:39.412 20:46:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:18:39.412 20:46:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:18:39.412 20:46:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:18:39.412 20:46:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:18:39.412 20:46:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:18:39.412 20:46:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:18:39.412 20:46:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:18:39.412 20:46:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:18:39.412 20:46:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:18:39.412 20:46:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:18:39.412 20:46:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:18:39.412 20:46:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.412 20:46:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:39.412 20:46:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.412 20:46:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:18:39.412 20:46:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:18:39.412 20:46:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:18:39.412 20:46:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:18:39.412 20:46:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:18:39.412 20:46:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.412 20:46:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:40.404 [2024-11-26 20:46:35.299003] bdev_nvme.c:7484:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:18:40.404 [2024-11-26 20:46:35.299036] bdev_nvme.c:7570:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:18:40.404 [2024-11-26 20:46:35.299050] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:18:40.404 [2024-11-26 20:46:35.305028] bdev_nvme.c:7413:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new subsystem nvme0 00:18:40.404 [2024-11-26 20:46:35.363370] bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.3:4421 00:18:40.404 [2024-11-26 20:46:35.364273] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x5746d0:1 started. 00:18:40.404 [2024-11-26 20:46:35.366357] bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:18:40.404 [2024-11-26 20:46:35.366398] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:18:40.404 20:46:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.404 [2024-11-26 20:46:35.368041] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x5746d0 was disconnected and freed. delete nvme_qpair. 
00:18:40.404 20:46:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:18:40.404 20:46:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:18:40.404 20:46:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:18:40.404 20:46:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:40.404 20:46:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:40.404 20:46:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:40.404 20:46:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:40.404 20:46:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:18:40.404 20:46:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.404 20:46:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:40.663 request: 00:18:40.663 { 00:18:40.663 "name": "nvme", 00:18:40.663 "trtype": "tcp", 00:18:40.663 "traddr": "10.0.0.3", 00:18:40.663 "adrfam": "ipv4", 00:18:40.663 "trsvcid": "8009", 00:18:40.663 "hostnqn": "nqn.2021-12.io.spdk:test", 00:18:40.663 "wait_for_attach": true, 00:18:40.663 "method": "bdev_nvme_start_discovery", 00:18:40.663 "req_id": 1 00:18:40.663 } 00:18:40.663 Got JSON-RPC error response 00:18:40.663 response: 00:18:40.663 { 00:18:40.663 "code": -17, 00:18:40.663 "message": "File exists" 00:18:40.663 } 00:18:40.663 20:46:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:40.663 20:46:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:18:40.663 20:46:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:40.663 20:46:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:40.663 20:46:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:40.663 20:46:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:18:40.663 20:46:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:18:40.663 20:46:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:18:40.663 20:46:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.663 20:46:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:40.663 20:46:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:18:40.663 20:46:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:18:40.663 20:46:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.663 20:46:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:18:40.663 20:46:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:18:40.663 20:46:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:40.663 20:46:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:40.663 20:46:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.663 20:46:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:40.663 20:46:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:40.663 20:46:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:40.663 20:46:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.663 20:46:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:18:40.663 20:46:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:18:40.663 20:46:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:18:40.663 20:46:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:18:40.663 20:46:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:40.663 20:46:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:40.663 20:46:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:40.663 20:46:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:40.663 20:46:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:18:40.663 20:46:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.663 20:46:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:40.663 request: 00:18:40.663 { 00:18:40.663 "name": "nvme_second", 00:18:40.663 "trtype": "tcp", 00:18:40.663 "traddr": "10.0.0.3", 00:18:40.663 "adrfam": "ipv4", 00:18:40.663 "trsvcid": "8009", 00:18:40.663 "hostnqn": "nqn.2021-12.io.spdk:test", 00:18:40.663 "wait_for_attach": true, 00:18:40.663 "method": "bdev_nvme_start_discovery", 00:18:40.663 "req_id": 1 00:18:40.663 } 00:18:40.663 Got JSON-RPC error response 00:18:40.663 response: 00:18:40.663 { 00:18:40.663 "code": -17, 00:18:40.663 "message": "File exists" 00:18:40.663 } 00:18:40.663 20:46:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:40.663 20:46:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:18:40.663 20:46:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:40.663 20:46:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # 
[[ -n '' ]] 00:18:40.663 20:46:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:40.663 20:46:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:18:40.663 20:46:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:18:40.663 20:46:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:18:40.663 20:46:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.663 20:46:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:40.663 20:46:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:18:40.663 20:46:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:18:40.663 20:46:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.663 20:46:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:18:40.663 20:46:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:18:40.663 20:46:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:40.663 20:46:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:40.663 20:46:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.663 20:46:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:40.663 20:46:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:40.663 20:46:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:40.664 20:46:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.664 20:46:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:18:40.664 20:46:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:18:40.664 20:46:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:18:40.664 20:46:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:18:40.664 20:46:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:40.664 20:46:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:40.664 20:46:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:40.664 20:46:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:40.664 20:46:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:18:40.664 20:46:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:18:40.664 20:46:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:42.040 [2024-11-26 20:46:36.610697] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:42.040 [2024-11-26 20:46:36.610760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x572fd0 with addr=10.0.0.3, port=8010 00:18:42.040 [2024-11-26 20:46:36.610798] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:18:42.040 [2024-11-26 20:46:36.610809] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:18:42.040 [2024-11-26 20:46:36.610817] bdev_nvme.c:7552:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] could not start discovery connect 00:18:42.975 [2024-11-26 20:46:37.610750] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:42.975 [2024-11-26 20:46:37.610836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x572fd0 with addr=10.0.0.3, port=8010 00:18:42.975 [2024-11-26 20:46:37.610861] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:18:42.975 [2024-11-26 20:46:37.610872] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:18:42.975 [2024-11-26 20:46:37.610882] bdev_nvme.c:7552:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] could not start discovery connect 00:18:43.910 [2024-11-26 20:46:38.610577] bdev_nvme.c:7527:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] timed out while attaching discovery ctrlr 00:18:43.910 request: 00:18:43.910 { 00:18:43.910 "name": "nvme_second", 00:18:43.910 "trtype": "tcp", 00:18:43.910 "traddr": "10.0.0.3", 00:18:43.910 "adrfam": "ipv4", 00:18:43.910 "trsvcid": "8010", 00:18:43.910 "hostnqn": "nqn.2021-12.io.spdk:test", 00:18:43.910 "wait_for_attach": false, 00:18:43.910 "attach_timeout_ms": 3000, 00:18:43.910 "method": "bdev_nvme_start_discovery", 00:18:43.910 "req_id": 1 00:18:43.910 } 00:18:43.910 Got JSON-RPC error response 00:18:43.910 response: 00:18:43.910 { 00:18:43.910 "code": -110, 00:18:43.910 "message": "Connection timed out" 00:18:43.910 } 00:18:43.910 20:46:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:43.910 20:46:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:18:43.910 20:46:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:43.910 20:46:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:43.910 20:46:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:43.910 20:46:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:18:43.910 20:46:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:18:43.910 20:46:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:18:43.910 20:46:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.910 20:46:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:18:43.910 20:46:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:43.910 20:46:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:18:43.910 20:46:38 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.910 20:46:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:18:43.910 20:46:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:18:43.910 20:46:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 76459 00:18:43.910 20:46:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:18:43.910 20:46:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:43.910 20:46:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:18:43.910 20:46:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:43.910 20:46:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:18:43.910 20:46:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:43.910 20:46:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:43.910 rmmod nvme_tcp 00:18:43.910 rmmod nvme_fabrics 00:18:43.910 rmmod nvme_keyring 00:18:43.910 20:46:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:43.910 20:46:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:18:43.910 20:46:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:18:43.910 20:46:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 76430 ']' 00:18:43.910 20:46:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 76430 00:18:43.910 20:46:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 76430 ']' 00:18:43.910 20:46:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 76430 00:18:43.911 20:46:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:18:43.911 20:46:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:43.911 20:46:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76430 00:18:43.911 20:46:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:43.911 20:46:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:43.911 killing process with pid 76430 00:18:43.911 20:46:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76430' 00:18:43.911 20:46:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 76430 00:18:43.911 20:46:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 76430 00:18:44.169 20:46:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:44.169 20:46:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:44.169 20:46:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:44.169 20:46:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:18:44.169 20:46:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:44.169 20:46:39 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:18:44.169 20:46:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:18:44.169 20:46:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:44.169 20:46:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:18:44.169 20:46:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:18:44.428 20:46:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:18:44.428 20:46:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:18:44.428 20:46:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:18:44.428 20:46:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:18:44.428 20:46:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:18:44.428 20:46:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:18:44.428 20:46:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:18:44.428 20:46:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:18:44.428 20:46:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:18:44.428 20:46:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:18:44.428 20:46:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:44.428 20:46:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:44.428 20:46:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@246 -- # remove_spdk_ns 00:18:44.428 20:46:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:44.428 20:46:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:44.428 20:46:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:44.687 20:46:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@300 -- # return 0 00:18:44.687 00:18:44.687 real 0m9.356s 00:18:44.687 user 0m16.625s 00:18:44.687 sys 0m2.670s 00:18:44.687 20:46:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:44.687 ************************************ 00:18:44.687 END TEST nvmf_host_discovery 00:18:44.687 ************************************ 00:18:44.687 20:46:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:44.687 20:46:39 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:18:44.687 20:46:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:44.687 20:46:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:44.687 20:46:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:18:44.687 
************************************ 00:18:44.687 START TEST nvmf_host_multipath_status 00:18:44.687 ************************************ 00:18:44.687 20:46:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:18:44.687 * Looking for test storage... 00:18:44.687 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:44.687 20:46:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:44.687 20:46:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:44.687 20:46:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lcov --version 00:18:44.947 20:46:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:44.947 20:46:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:44.947 20:46:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:44.947 20:46:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:44.947 20:46:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:18:44.947 20:46:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:18:44.947 20:46:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:18:44.947 20:46:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:18:44.947 20:46:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:18:44.947 20:46:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:18:44.947 20:46:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:18:44.947 20:46:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:44.947 20:46:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:18:44.947 20:46:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:18:44.947 20:46:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:44.947 20:46:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:44.947 20:46:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:18:44.947 20:46:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:18:44.947 20:46:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:44.947 20:46:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:18:44.947 20:46:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:18:44.947 20:46:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:18:44.947 20:46:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:18:44.947 20:46:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:44.947 20:46:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:18:44.947 20:46:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:18:44.947 20:46:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:44.947 20:46:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:44.947 20:46:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:18:44.947 20:46:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:44.947 20:46:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:44.947 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:44.947 --rc genhtml_branch_coverage=1 00:18:44.947 --rc genhtml_function_coverage=1 00:18:44.947 --rc genhtml_legend=1 00:18:44.947 --rc geninfo_all_blocks=1 00:18:44.947 --rc geninfo_unexecuted_blocks=1 00:18:44.947 00:18:44.947 ' 00:18:44.947 20:46:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:44.947 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:44.947 --rc genhtml_branch_coverage=1 00:18:44.947 --rc genhtml_function_coverage=1 00:18:44.947 --rc genhtml_legend=1 00:18:44.947 --rc geninfo_all_blocks=1 00:18:44.947 --rc geninfo_unexecuted_blocks=1 00:18:44.947 00:18:44.947 ' 00:18:44.947 20:46:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:44.947 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:44.947 --rc genhtml_branch_coverage=1 00:18:44.947 --rc genhtml_function_coverage=1 00:18:44.947 --rc genhtml_legend=1 00:18:44.947 --rc geninfo_all_blocks=1 00:18:44.947 --rc geninfo_unexecuted_blocks=1 00:18:44.947 00:18:44.947 ' 00:18:44.947 20:46:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:44.947 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:44.947 --rc genhtml_branch_coverage=1 00:18:44.947 --rc genhtml_function_coverage=1 00:18:44.947 --rc genhtml_legend=1 00:18:44.947 --rc geninfo_all_blocks=1 00:18:44.947 --rc geninfo_unexecuted_blocks=1 00:18:44.947 00:18:44.947 ' 00:18:44.947 20:46:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:44.947 20:46:39 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:18:44.947 20:46:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:44.947 20:46:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:44.947 20:46:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:44.947 20:46:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:44.947 20:46:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:44.947 20:46:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:44.947 20:46:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:44.947 20:46:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:44.947 20:46:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:44.947 20:46:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:44.947 20:46:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b 00:18:44.947 20:46:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=5b7a0101-ee75-44bd-b64f-b6a56d193f2b 00:18:44.947 20:46:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:44.947 20:46:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:44.947 20:46:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:44.947 20:46:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:44.947 20:46:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:44.947 20:46:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:18:44.947 20:46:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:44.947 20:46:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:44.947 20:46:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:44.947 20:46:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:44.948 20:46:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:44.948 20:46:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:44.948 20:46:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:18:44.948 20:46:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:44.948 20:46:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:18:44.948 20:46:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:44.948 20:46:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:44.948 20:46:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:44.948 20:46:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:44.948 20:46:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:44.948 20:46:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:44.948 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:44.948 20:46:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:44.948 20:46:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:44.948 20:46:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:44.948 20:46:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:18:44.948 20:46:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:18:44.948 20:46:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:44.948 20:46:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:18:44.948 20:46:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:44.948 20:46:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:18:44.948 20:46:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:18:44.948 20:46:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:44.948 20:46:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:44.948 20:46:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:44.948 20:46:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:44.948 20:46:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:44.948 20:46:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:44.948 20:46:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:44.948 20:46:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:44.948 20:46:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:18:44.948 20:46:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:18:44.948 20:46:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:18:44.948 20:46:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:18:44.948 20:46:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:18:44.948 20:46:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@460 -- # nvmf_veth_init 00:18:44.948 20:46:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:44.948 20:46:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:18:44.948 20:46:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:18:44.948 20:46:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:18:44.948 20:46:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:44.948 20:46:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:18:44.948 20:46:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:44.948 20:46:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:18:44.948 20:46:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@153 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:44.948 20:46:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:18:44.948 20:46:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:44.948 20:46:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:44.948 20:46:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:44.948 20:46:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:44.948 20:46:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:44.948 20:46:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:44.948 20:46:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:18:44.948 Cannot find device "nvmf_init_br" 00:18:44.948 20:46:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # true 00:18:44.948 20:46:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:18:44.948 Cannot find device "nvmf_init_br2" 00:18:44.948 20:46:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # true 00:18:44.948 20:46:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:18:44.948 Cannot find device "nvmf_tgt_br" 00:18:44.948 20:46:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@164 -- # true 00:18:44.948 20:46:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:18:44.948 Cannot find device "nvmf_tgt_br2" 00:18:44.948 20:46:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@165 -- # true 00:18:44.948 20:46:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:18:44.948 Cannot find device "nvmf_init_br" 00:18:44.948 20:46:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # true 00:18:44.948 20:46:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:18:44.948 Cannot find device "nvmf_init_br2" 00:18:44.948 20:46:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@167 -- # true 00:18:44.948 20:46:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:18:44.948 Cannot find device "nvmf_tgt_br" 00:18:44.948 20:46:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@168 -- # true 00:18:44.948 20:46:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:18:44.948 Cannot find device "nvmf_tgt_br2" 00:18:44.948 20:46:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # true 00:18:44.948 20:46:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:18:44.948 Cannot find device "nvmf_br" 00:18:44.948 20:46:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # true 00:18:44.948 20:46:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # ip link delete 
nvmf_init_if 00:18:44.948 Cannot find device "nvmf_init_if" 00:18:44.948 20:46:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # true 00:18:44.948 20:46:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:18:44.948 Cannot find device "nvmf_init_if2" 00:18:44.948 20:46:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@172 -- # true 00:18:44.948 20:46:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:44.948 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:44.948 20:46:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@173 -- # true 00:18:44.948 20:46:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:44.948 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:44.948 20:46:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # true 00:18:44.948 20:46:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:18:44.948 20:46:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:45.206 20:46:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:18:45.206 20:46:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:45.206 20:46:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:45.206 20:46:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:45.206 20:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:45.206 20:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:45.206 20:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:18:45.206 20:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:18:45.206 20:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:18:45.206 20:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:18:45.206 20:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:18:45.206 20:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:18:45.206 20:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:18:45.206 20:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:18:45.206 20:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:18:45.206 20:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:45.206 20:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:45.206 20:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:45.206 20:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:18:45.206 20:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:18:45.206 20:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:18:45.206 20:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:18:45.206 20:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:45.206 20:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:45.206 20:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:45.206 20:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:18:45.206 20:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:18:45.206 20:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:18:45.206 20:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:45.206 20:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:18:45.464 20:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:18:45.464 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:45.464 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.113 ms 00:18:45.464 00:18:45.464 --- 10.0.0.3 ping statistics --- 00:18:45.464 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:45.464 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:18:45.464 20:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:18:45.464 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:18:45.464 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.051 ms 00:18:45.464 00:18:45.464 --- 10.0.0.4 ping statistics --- 00:18:45.464 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:45.464 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:18:45.464 20:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:45.464 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:45.464 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:18:45.464 00:18:45.464 --- 10.0.0.1 ping statistics --- 00:18:45.464 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:45.464 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:18:45.464 20:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:18:45.464 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:45.464 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.055 ms 00:18:45.464 00:18:45.464 --- 10.0.0.2 ping statistics --- 00:18:45.464 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:45.464 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:18:45.464 20:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:45.464 20:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@461 -- # return 0 00:18:45.464 20:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:45.464 20:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:45.464 20:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:45.464 20:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:45.464 20:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:45.464 20:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:45.465 20:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:45.465 20:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:18:45.465 20:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:45.465 20:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:45.465 20:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:18:45.465 20:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=76946 00:18:45.465 20:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 76946 00:18:45.465 20:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:18:45.465 20:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 76946 ']' 00:18:45.465 20:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:45.465 20:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:45.465 20:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:45.465 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
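At this point the nvmf/common.sh helpers have finished building the test network: the initiator-side veths nvmf_init_if (10.0.0.1) and nvmf_init_if2 (10.0.0.2) stay in the root namespace, their target-side counterparts nvmf_tgt_if (10.0.0.3) and nvmf_tgt_if2 (10.0.0.4) live in the nvmf_tgt_ns_spdk namespace, everything is joined through the nvmf_br bridge, and iptables accepts TCP port 4420 on the initiator interfaces; the four pings confirm reachability in both directions. A minimal stand-alone sketch of one initiator/target pair, using the interface names and addresses echoed above (the second pair is analogous; cleanup and error handling omitted, root required):

# Rebuild one veth pair of the topology logged above (sketch, not the test helper itself)
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator side + its bridge port
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target side + its bridge port
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link add nvmf_br type bridge
for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.3    # root namespace -> target namespace should now answer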
00:18:45.465 20:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:45.465 20:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:18:45.465 [2024-11-26 20:46:40.311905] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:18:45.465 [2024-11-26 20:46:40.312007] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:45.724 [2024-11-26 20:46:40.469036] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:45.724 [2024-11-26 20:46:40.540990] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:45.724 [2024-11-26 20:46:40.541051] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:45.724 [2024-11-26 20:46:40.541066] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:45.724 [2024-11-26 20:46:40.541079] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:45.724 [2024-11-26 20:46:40.541090] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:45.724 [2024-11-26 20:46:40.542664] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:45.724 [2024-11-26 20:46:40.542665] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:45.724 [2024-11-26 20:46:40.634184] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:46.659 20:46:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:46.659 20:46:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:18:46.659 20:46:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:46.659 20:46:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:46.659 20:46:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:18:46.659 20:46:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:46.659 20:46:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=76946 00:18:46.660 20:46:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:46.660 [2024-11-26 20:46:41.647219] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:46.918 20:46:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:18:47.176 Malloc0 00:18:47.176 20:46:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:18:47.433 20:46:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:47.692 20:46:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:18:47.951 [2024-11-26 20:46:42.727806] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:47.951 20:46:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:18:48.210 [2024-11-26 20:46:42.959909] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:18:48.210 20:46:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=77006 00:18:48.210 20:46:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:18:48.210 20:46:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:48.210 20:46:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 77006 /var/tmp/bdevperf.sock 00:18:48.210 20:46:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 77006 ']' 00:18:48.210 20:46:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:48.210 20:46:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:48.210 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:48.210 20:46:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
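With the target application up, the subsystem is configured entirely over rpc.py (default socket /var/tmp/spdk.sock): a TCP transport, a small malloc bdev, and subsystem nqn.2016-06.io.spdk:cnode1 created with ANA reporting enabled (-r), exposing that bdev and listening twice on 10.0.0.3, ports 4420 and 4421. The sequence below is collected from the commands echoed above; the rpc.py path is the one used in this run and the transport options are passed through unchanged:

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512 -b Malloc0                  # 64 MiB bdev, 512-byte blocks
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421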
00:18:48.210 20:46:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:48.210 20:46:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:18:49.189 20:46:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:49.189 20:46:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:18:49.189 20:46:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:18:49.447 20:46:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:18:49.705 Nvme0n1 00:18:49.705 20:46:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:18:50.296 Nvme0n1 00:18:50.296 20:46:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:18:50.296 20:46:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:18:52.212 20:46:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:18:52.212 20:46:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:18:52.470 20:46:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:18:52.728 20:46:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:18:53.662 20:46:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:18:53.662 20:46:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:18:53.662 20:46:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:53.662 20:46:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:18:53.920 20:46:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:53.920 20:46:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:18:53.920 20:46:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:53.920 20:46:48 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:18:54.179 20:46:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:54.179 20:46:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:18:54.179 20:46:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:18:54.179 20:46:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:54.438 20:46:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:54.438 20:46:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:18:54.438 20:46:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:54.438 20:46:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:18:54.696 20:46:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:54.696 20:46:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:18:54.696 20:46:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:18:54.696 20:46:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:54.954 20:46:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:54.954 20:46:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:18:54.954 20:46:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:54.954 20:46:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:18:55.212 20:46:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:55.212 20:46:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:18:55.212 20:46:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:18:55.470 20:46:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:18:55.727 20:46:50 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:18:57.103 20:46:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:18:57.103 20:46:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:18:57.103 20:46:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:57.103 20:46:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:18:57.103 20:46:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:57.103 20:46:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:18:57.103 20:46:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:57.103 20:46:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:18:57.668 20:46:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:57.668 20:46:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:18:57.668 20:46:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:57.668 20:46:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:18:57.926 20:46:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:57.926 20:46:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:18:57.926 20:46:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:18:57.926 20:46:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:58.185 20:46:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:58.185 20:46:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:18:58.185 20:46:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:18:58.185 20:46:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:58.443 20:46:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:58.443 20:46:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:18:58.443 20:46:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:58.443 20:46:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:18:58.701 20:46:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:58.701 20:46:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:18:58.701 20:46:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:18:58.959 20:46:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n non_optimized 00:18:59.217 20:46:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:19:00.151 20:46:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:19:00.151 20:46:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:19:00.151 20:46:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:00.151 20:46:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:00.409 20:46:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:00.409 20:46:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:19:00.409 20:46:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:00.409 20:46:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:00.976 20:46:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:00.976 20:46:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:00.976 20:46:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:00.976 20:46:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:01.235 20:46:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:01.235 20:46:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 
connected true 00:19:01.235 20:46:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:01.235 20:46:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:01.492 20:46:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:01.492 20:46:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:19:01.492 20:46:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:01.492 20:46:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:01.492 20:46:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:01.492 20:46:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:19:01.492 20:46:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:01.750 20:46:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:01.750 20:46:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:01.750 20:46:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:19:01.750 20:46:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:19:02.317 20:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:19:02.317 20:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:19:03.691 20:46:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:19:03.691 20:46:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:19:03.691 20:46:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:03.691 20:46:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:03.691 20:46:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:03.691 20:46:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:19:03.691 20:46:58 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:03.691 20:46:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:04.257 20:46:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:04.257 20:46:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:04.257 20:46:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:04.257 20:46:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:04.514 20:46:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:04.514 20:46:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:04.514 20:46:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:04.514 20:46:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:04.771 20:46:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:04.771 20:46:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:19:04.771 20:46:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:04.771 20:46:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:05.029 20:46:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:05.029 20:46:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:19:05.029 20:46:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:05.029 20:46:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:05.288 20:47:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:05.288 20:47:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:19:05.288 20:47:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:19:05.288 20:47:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:19:05.546 20:47:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:19:06.918 20:47:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:19:06.918 20:47:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:19:06.918 20:47:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:06.918 20:47:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:06.918 20:47:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:06.918 20:47:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:19:06.918 20:47:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:06.918 20:47:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:07.176 20:47:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:07.176 20:47:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:07.176 20:47:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:07.176 20:47:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:07.434 20:47:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:07.434 20:47:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:07.434 20:47:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:07.434 20:47:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:08.003 20:47:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:08.003 20:47:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:19:08.003 20:47:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:08.003 20:47:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4420").accessible' 00:19:08.261 20:47:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:08.261 20:47:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:19:08.261 20:47:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:08.261 20:47:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:08.520 20:47:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:08.520 20:47:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:19:08.520 20:47:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:19:08.778 20:47:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:19:09.037 20:47:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:19:09.972 20:47:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:19:09.972 20:47:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:19:09.972 20:47:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:09.972 20:47:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:10.538 20:47:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:10.538 20:47:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:19:10.538 20:47:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:10.538 20:47:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:10.796 20:47:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:10.796 20:47:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:10.796 20:47:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:10.796 20:47:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 
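Because Nvme0 was attached twice with -x multipath (once per listener port), bdev_nvme_get_io_paths on the bdevperf RPC socket reports two I/O paths for the same bdev, and every check_status round above reduces to the same probe: fetch the paths and read one attribute (current, connected, or accessible) of the path whose trsvcid matches. A condensed sketch of that helper, reconstructed from the rpc.py and jq invocations echoed in the log (the real multipath_status.sh may differ in detail):

# Probe one attribute of one path via bdevperf's RPC socket and compare it (sketch)
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
port_status() {
    local port=$1 attr=$2 expected=$3 actual
    actual=$("$RPC" -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
        jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$attr")
    [[ "$actual" == "$expected" ]]
}
port_status 4420 current true        # 4420 is the path I/O is currently routed to
port_status 4421 accessible false    # 4421 became unreachable after an ANA change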
00:19:11.055 20:47:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:11.055 20:47:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:11.055 20:47:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:11.055 20:47:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:11.313 20:47:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:11.313 20:47:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:19:11.313 20:47:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:11.313 20:47:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:11.572 20:47:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:11.572 20:47:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:19:11.572 20:47:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:11.572 20:47:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:12.140 20:47:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:12.140 20:47:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:19:12.398 20:47:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:19:12.398 20:47:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:19:12.398 20:47:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:19:12.656 20:47:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:19:13.674 20:47:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:19:13.674 20:47:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:19:13.674 20:47:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
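Between probes the state machine is driven from the target side: nvmf_subsystem_listener_set_ana_state changes the ANA state advertised by one listener, and after the one-second sleep the host-side flags are expected to follow (an optimized path becomes current, an inaccessible one drops accessible). One combination exercised earlier in this run, 4420 inaccessible and 4421 optimized, as a sketch:

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
NQN=nqn.2016-06.io.spdk:cnode1
$RPC nvmf_subsystem_listener_set_ana_state "$NQN" -t tcp -a 10.0.0.3 -s 4420 -n inaccessible
$RPC nvmf_subsystem_listener_set_ana_state "$NQN" -t tcp -a 10.0.0.3 -s 4421 -n optimized
sleep 1    # give the initiator time to pick up the new ANA states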
00:19:13.674 20:47:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:13.933 20:47:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:13.933 20:47:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:19:13.933 20:47:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:13.933 20:47:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:14.191 20:47:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:14.191 20:47:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:14.191 20:47:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:14.191 20:47:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:14.449 20:47:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:14.449 20:47:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:14.449 20:47:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:14.449 20:47:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:15.015 20:47:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:15.015 20:47:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:19:15.015 20:47:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:15.015 20:47:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:15.015 20:47:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:15.015 20:47:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:19:15.015 20:47:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:15.015 20:47:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:15.273 20:47:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:15.273 
20:47:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:19:15.273 20:47:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:19:15.532 20:47:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:19:15.790 20:47:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:19:16.727 20:47:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:19:16.727 20:47:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:19:16.727 20:47:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:16.727 20:47:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:16.985 20:47:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:16.985 20:47:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:19:16.985 20:47:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:16.985 20:47:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:17.243 20:47:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:17.243 20:47:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:17.243 20:47:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:17.243 20:47:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:17.501 20:47:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:17.501 20:47:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:17.501 20:47:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:17.501 20:47:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:17.758 20:47:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:17.758 20:47:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:19:17.758 20:47:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:17.758 20:47:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:18.017 20:47:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:18.017 20:47:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:19:18.017 20:47:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:18.017 20:47:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:18.275 20:47:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:18.276 20:47:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:19:18.276 20:47:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:19:18.534 20:47:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n non_optimized 00:19:18.793 20:47:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:19:19.727 20:47:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:19:19.727 20:47:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:19:19.728 20:47:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:19.728 20:47:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:20.294 20:47:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:20.294 20:47:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:19:20.294 20:47:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:20.294 20:47:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:20.552 20:47:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:20.552 20:47:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 
connected true 00:19:20.552 20:47:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:20.552 20:47:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:20.810 20:47:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:20.810 20:47:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:20.810 20:47:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:20.810 20:47:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:21.068 20:47:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:21.068 20:47:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:19:21.068 20:47:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:21.068 20:47:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:21.325 20:47:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:21.325 20:47:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:19:21.325 20:47:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:21.325 20:47:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:21.582 20:47:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:21.582 20:47:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:19:21.582 20:47:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:19:22.146 20:47:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:19:22.404 20:47:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:19:23.336 20:47:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:19:23.336 20:47:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:19:23.336 20:47:18 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:23.336 20:47:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:23.594 20:47:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:23.594 20:47:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:19:23.594 20:47:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:23.594 20:47:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:23.852 20:47:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:23.852 20:47:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:23.852 20:47:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:23.852 20:47:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:24.109 20:47:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:24.110 20:47:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:24.110 20:47:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:24.110 20:47:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:24.375 20:47:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:24.375 20:47:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:19:24.375 20:47:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:24.375 20:47:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:24.637 20:47:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:24.637 20:47:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:19:24.637 20:47:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:24.637 20:47:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4421").accessible' 00:19:24.894 20:47:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:24.894 20:47:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 77006 00:19:24.894 20:47:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 77006 ']' 00:19:24.894 20:47:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 77006 00:19:24.894 20:47:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:19:24.894 20:47:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:24.894 20:47:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77006 00:19:24.894 killing process with pid 77006 00:19:24.894 20:47:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:24.894 20:47:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:24.894 20:47:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77006' 00:19:24.894 20:47:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 77006 00:19:24.894 20:47:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 77006 00:19:24.894 { 00:19:24.894 "results": [ 00:19:24.894 { 00:19:24.894 "job": "Nvme0n1", 00:19:24.894 "core_mask": "0x4", 00:19:24.894 "workload": "verify", 00:19:24.894 "status": "terminated", 00:19:24.894 "verify_range": { 00:19:24.894 "start": 0, 00:19:24.894 "length": 16384 00:19:24.894 }, 00:19:24.894 "queue_depth": 128, 00:19:24.894 "io_size": 4096, 00:19:24.894 "runtime": 34.774913, 00:19:24.894 "iops": 9921.520148734808, 00:19:24.894 "mibps": 38.75593808099534, 00:19:24.894 "io_failed": 0, 00:19:24.894 "io_timeout": 0, 00:19:24.894 "avg_latency_us": 12878.741564585629, 00:19:24.894 "min_latency_us": 92.64761904761905, 00:19:24.894 "max_latency_us": 4026531.84 00:19:24.894 } 00:19:24.894 ], 00:19:24.894 "core_count": 1 00:19:24.894 } 00:19:25.156 20:47:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 77006 00:19:25.156 20:47:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:19:25.156 [2024-11-26 20:46:43.043045] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:19:25.156 [2024-11-26 20:46:43.043190] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77006 ] 00:19:25.156 [2024-11-26 20:46:43.203564] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:25.156 [2024-11-26 20:46:43.273614] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:25.156 [2024-11-26 20:46:43.336987] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:25.156 Running I/O for 90 seconds... 
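For readability, here is a minimal sketch of what the port_status/check_status helpers traced above appear to be doing. It is reconstructed from the trace output only, not copied from test/nvmf/host/multipath_status.sh, so the function body and helper wiring are assumptions; the RPC names, socket path, and jq filter are taken verbatim from the trace.

    # Sketch (assumption, reconstructed from the trace; not the actual script source).
    # port_status asks the bdevperf RPC socket for its io_paths and compares one
    # field of the path that listens on the given port against the expected value.
    port_status() {
        local port=$1 field=$2 expected=$3
        local actual
        actual=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
                     bdev_nvme_get_io_paths \
                 | jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$field")
        [[ "$actual" == "$expected" ]]
    }

    # Per the trace, check_status A B C D E F expands to six such assertions
    # (lines @68-@73 of the script), run after set_ANA_state flips the listener
    # ANA state on the target (lines @59-@60) and a one-second settle (sleep 1):
    #
    #   rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
    #       -t tcp -a 10.0.0.3 -s 4421 -n inaccessible
    #   sleep 1
    #   port_status 4420 current    A ; port_status 4421 current    B
    #   port_status 4420 connected  C ; port_status 4421 connected  D
    #   port_status 4420 accessible E ; port_status 4421 accessible F

The log that follows is the captured try.txt from the bdevperf run, echoed by the cat above; it records the host-side I/O path switching as the ANA states change.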
00:19:25.156 11240.00 IOPS, 43.91 MiB/s [2024-11-26T20:47:20.149Z] 11176.00 IOPS, 43.66 MiB/s [2024-11-26T20:47:20.149Z] 10806.67 IOPS, 42.21 MiB/s [2024-11-26T20:47:20.149Z] 10680.00 IOPS, 41.72 MiB/s [2024-11-26T20:47:20.149Z] 10713.60 IOPS, 41.85 MiB/s [2024-11-26T20:47:20.149Z] 10621.00 IOPS, 41.49 MiB/s [2024-11-26T20:47:20.149Z] 10563.86 IOPS, 41.27 MiB/s [2024-11-26T20:47:20.149Z] 10635.38 IOPS, 41.54 MiB/s [2024-11-26T20:47:20.149Z] 10660.78 IOPS, 41.64 MiB/s [2024-11-26T20:47:20.149Z] 10563.50 IOPS, 41.26 MiB/s [2024-11-26T20:47:20.149Z] 10481.73 IOPS, 40.94 MiB/s [2024-11-26T20:47:20.149Z] 10401.42 IOPS, 40.63 MiB/s [2024-11-26T20:47:20.149Z] 10331.15 IOPS, 40.36 MiB/s [2024-11-26T20:47:20.149Z] 10289.21 IOPS, 40.19 MiB/s [2024-11-26T20:47:20.150Z] 10383.00 IOPS, 40.56 MiB/s [2024-11-26T20:47:20.150Z] [2024-11-26 20:47:00.230823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:80264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.157 [2024-11-26 20:47:00.230906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:25.157 [2024-11-26 20:47:00.230957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:80272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.157 [2024-11-26 20:47:00.230973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:25.157 [2024-11-26 20:47:00.230993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:80280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.157 [2024-11-26 20:47:00.231007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:19:25.157 [2024-11-26 20:47:00.231026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:80288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.157 [2024-11-26 20:47:00.231040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:19:25.157 [2024-11-26 20:47:00.231059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:80296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.157 [2024-11-26 20:47:00.231072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:25.157 [2024-11-26 20:47:00.231091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:80304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.157 [2024-11-26 20:47:00.231105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:25.157 [2024-11-26 20:47:00.231124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:80312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.157 [2024-11-26 20:47:00.231137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:25.157 [2024-11-26 20:47:00.231168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:80320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.157 [2024-11-26 20:47:00.231182] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:25.157 [2024-11-26 20:47:00.231201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:79752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.157 [2024-11-26 20:47:00.231215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:25.157 [2024-11-26 20:47:00.231271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:79760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.157 [2024-11-26 20:47:00.231285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:19:25.157 [2024-11-26 20:47:00.231312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:79768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.157 [2024-11-26 20:47:00.231325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:19:25.157 [2024-11-26 20:47:00.231344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:79776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.157 [2024-11-26 20:47:00.231361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:19:25.157 [2024-11-26 20:47:00.231380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:79784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.157 [2024-11-26 20:47:00.231393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:19:25.157 [2024-11-26 20:47:00.231412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:79792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.157 [2024-11-26 20:47:00.231426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:19:25.157 [2024-11-26 20:47:00.231444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:79800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.157 [2024-11-26 20:47:00.231458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:19:25.157 [2024-11-26 20:47:00.231477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:79808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.157 [2024-11-26 20:47:00.231491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:19:25.157 [2024-11-26 20:47:00.231524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:80328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.157 [2024-11-26 20:47:00.231538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:19:25.157 [2024-11-26 20:47:00.231557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:80336 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:19:25.157 [2024-11-26 20:47:00.231571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:19:25.157 [2024-11-26 20:47:00.231590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:80344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.157 [2024-11-26 20:47:00.231603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:19:25.157 [2024-11-26 20:47:00.231622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:80352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.157 [2024-11-26 20:47:00.231635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:19:25.157 [2024-11-26 20:47:00.231653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:80360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.157 [2024-11-26 20:47:00.231666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:19:25.157 [2024-11-26 20:47:00.231694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:80368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.157 [2024-11-26 20:47:00.231707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:19:25.157 [2024-11-26 20:47:00.231726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:80376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.157 [2024-11-26 20:47:00.231740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:19:25.157 [2024-11-26 20:47:00.231761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:80384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.157 [2024-11-26 20:47:00.231774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:19:25.157 [2024-11-26 20:47:00.231793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:79816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.157 [2024-11-26 20:47:00.231806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:19:25.157 [2024-11-26 20:47:00.231827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:79824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.157 [2024-11-26 20:47:00.231840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:19:25.157 [2024-11-26 20:47:00.231860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:79832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.157 [2024-11-26 20:47:00.231873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:25.157 [2024-11-26 20:47:00.231892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:56 nsid:1 lba:79840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.157 [2024-11-26 20:47:00.231906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:25.157 [2024-11-26 20:47:00.231926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:79848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.157 [2024-11-26 20:47:00.231939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:19:25.157 [2024-11-26 20:47:00.231959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:79856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.157 [2024-11-26 20:47:00.231972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:19:25.157 [2024-11-26 20:47:00.231991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:79864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.157 [2024-11-26 20:47:00.232005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:25.157 [2024-11-26 20:47:00.232025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:79872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.157 [2024-11-26 20:47:00.232039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:19:25.157 [2024-11-26 20:47:00.232060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:80392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.157 [2024-11-26 20:47:00.232076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:25.158 [2024-11-26 20:47:00.232101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:80400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.158 [2024-11-26 20:47:00.232115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:19:25.158 [2024-11-26 20:47:00.232135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:80408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.158 [2024-11-26 20:47:00.232148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:19:25.158 [2024-11-26 20:47:00.232177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:80416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.158 [2024-11-26 20:47:00.232190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:19:25.158 [2024-11-26 20:47:00.232209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:80424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.158 [2024-11-26 20:47:00.232239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:25.158 [2024-11-26 20:47:00.232260] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:80432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.158 [2024-11-26 20:47:00.232275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:19:25.158 [2024-11-26 20:47:00.232296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:80440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.158 [2024-11-26 20:47:00.232311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:19:25.158 [2024-11-26 20:47:00.232332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:80448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.158 [2024-11-26 20:47:00.232347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:19:25.158 [2024-11-26 20:47:00.232368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:80456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.158 [2024-11-26 20:47:00.232382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:19:25.158 [2024-11-26 20:47:00.232404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:80464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.158 [2024-11-26 20:47:00.232419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:19:25.158 [2024-11-26 20:47:00.232440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:80472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.158 [2024-11-26 20:47:00.232454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:19:25.158 [2024-11-26 20:47:00.232475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:80480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.158 [2024-11-26 20:47:00.232490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:19:25.158 [2024-11-26 20:47:00.232511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:80488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.158 [2024-11-26 20:47:00.232526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:19:25.158 [2024-11-26 20:47:00.232546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:80496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.158 [2024-11-26 20:47:00.232579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:19:25.158 [2024-11-26 20:47:00.232599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:80504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.158 [2024-11-26 20:47:00.232614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0055 p:0 m:0 
dnr:0 00:19:25.158 [2024-11-26 20:47:00.232634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:80512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.158 [2024-11-26 20:47:00.232647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:19:25.158 [2024-11-26 20:47:00.232666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:79880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.158 [2024-11-26 20:47:00.232680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:19:25.158 [2024-11-26 20:47:00.232700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:79888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.158 [2024-11-26 20:47:00.232713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:19:25.158 [2024-11-26 20:47:00.232751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:79896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.158 [2024-11-26 20:47:00.232766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:19:25.158 [2024-11-26 20:47:00.232786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:79904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.158 [2024-11-26 20:47:00.232801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:19:25.158 [2024-11-26 20:47:00.232822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:79912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.158 [2024-11-26 20:47:00.232837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:19:25.158 [2024-11-26 20:47:00.232858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:79920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.158 [2024-11-26 20:47:00.232872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:19:25.158 [2024-11-26 20:47:00.232893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:79928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.158 [2024-11-26 20:47:00.232909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:19:25.158 [2024-11-26 20:47:00.232929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:79936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.158 [2024-11-26 20:47:00.232944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:19:25.158 [2024-11-26 20:47:00.232965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:79944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.158 [2024-11-26 20:47:00.232980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:19:25.158 [2024-11-26 20:47:00.233001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.158 [2024-11-26 20:47:00.233022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:19:25.158 [2024-11-26 20:47:00.233043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:79960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.158 [2024-11-26 20:47:00.233058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:25.158 [2024-11-26 20:47:00.233079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:79968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.158 [2024-11-26 20:47:00.233094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:25.158 [2024-11-26 20:47:00.233115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:79976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.158 [2024-11-26 20:47:00.233130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:19:25.158 [2024-11-26 20:47:00.233151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:79984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.158 [2024-11-26 20:47:00.233165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:19:25.158 [2024-11-26 20:47:00.233188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:79992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.158 [2024-11-26 20:47:00.233211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:25.158 [2024-11-26 20:47:00.233232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:80000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.158 [2024-11-26 20:47:00.233247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:19:25.159 [2024-11-26 20:47:00.233278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:80520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.159 [2024-11-26 20:47:00.233294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:25.159 [2024-11-26 20:47:00.233316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:80528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.159 [2024-11-26 20:47:00.233331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:19:25.159 [2024-11-26 20:47:00.233351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:80536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.159 [2024-11-26 20:47:00.233366] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:19:25.159 [2024-11-26 20:47:00.233399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:80544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.159 [2024-11-26 20:47:00.233413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:19:25.159 [2024-11-26 20:47:00.233432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:80552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.159 [2024-11-26 20:47:00.233445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:25.159 [2024-11-26 20:47:00.233465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:80560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.159 [2024-11-26 20:47:00.233478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:19:25.159 [2024-11-26 20:47:00.233504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:80568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.159 [2024-11-26 20:47:00.233518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:19:25.159 [2024-11-26 20:47:00.233537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:80576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.159 [2024-11-26 20:47:00.233551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:19:25.159 [2024-11-26 20:47:00.233570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:80008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.159 [2024-11-26 20:47:00.233583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:19:25.159 [2024-11-26 20:47:00.233607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:80016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.159 [2024-11-26 20:47:00.233621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:19:25.159 [2024-11-26 20:47:00.233640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:80024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.159 [2024-11-26 20:47:00.233654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:19:25.159 [2024-11-26 20:47:00.233673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:80032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.159 [2024-11-26 20:47:00.233687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:19:25.159 [2024-11-26 20:47:00.233705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:80040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:25.159 [2024-11-26 20:47:00.233719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:19:25.159 [2024-11-26 20:47:00.233738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:80048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.159 [2024-11-26 20:47:00.233752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:19:25.159 [2024-11-26 20:47:00.233772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:80056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.159 [2024-11-26 20:47:00.233786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:19:25.159 [2024-11-26 20:47:00.233805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:80064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.159 [2024-11-26 20:47:00.233819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:19:25.159 [2024-11-26 20:47:00.233839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:80072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.159 [2024-11-26 20:47:00.233852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:19:25.159 [2024-11-26 20:47:00.233872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:80080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.159 [2024-11-26 20:47:00.233885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:19:25.159 [2024-11-26 20:47:00.233909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:80088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.159 [2024-11-26 20:47:00.233923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:19:25.159 [2024-11-26 20:47:00.233942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:80096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.159 [2024-11-26 20:47:00.233956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:19:25.159 [2024-11-26 20:47:00.233975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:80104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.159 [2024-11-26 20:47:00.233988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:19:25.159 [2024-11-26 20:47:00.234007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:80112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.159 [2024-11-26 20:47:00.234020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:19:25.159 [2024-11-26 20:47:00.234040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 
nsid:1 lba:80120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.159 [2024-11-26 20:47:00.234053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:19:25.159 [2024-11-26 20:47:00.234073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:80128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.159 [2024-11-26 20:47:00.234086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:19:25.159 [2024-11-26 20:47:00.234115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:80584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.159 [2024-11-26 20:47:00.234131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:19:25.159 [2024-11-26 20:47:00.234152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:80592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.159 [2024-11-26 20:47:00.234174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.159 [2024-11-26 20:47:00.234194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:80600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.159 [2024-11-26 20:47:00.234207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:25.159 [2024-11-26 20:47:00.234227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:80608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.159 [2024-11-26 20:47:00.234240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:25.159 [2024-11-26 20:47:00.234259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:80616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.159 [2024-11-26 20:47:00.234273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:19:25.159 [2024-11-26 20:47:00.234293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:80624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.159 [2024-11-26 20:47:00.234310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:19:25.159 [2024-11-26 20:47:00.234329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:80632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.159 [2024-11-26 20:47:00.234349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:25.159 [2024-11-26 20:47:00.234369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:80640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.159 [2024-11-26 20:47:00.234383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:19:25.159 [2024-11-26 20:47:00.234401] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:80136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.159 [2024-11-26 20:47:00.234415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:25.159 [2024-11-26 20:47:00.234435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:80144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.159 [2024-11-26 20:47:00.234448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:19:25.159 [2024-11-26 20:47:00.234467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:80152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.160 [2024-11-26 20:47:00.234481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:19:25.160 [2024-11-26 20:47:00.234500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:80160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.160 [2024-11-26 20:47:00.234514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:19:25.160 [2024-11-26 20:47:00.234532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:80168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.160 [2024-11-26 20:47:00.234546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:25.160 [2024-11-26 20:47:00.234565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:80176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.160 [2024-11-26 20:47:00.234578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:19:25.160 [2024-11-26 20:47:00.234598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:80184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.160 [2024-11-26 20:47:00.234611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:19:25.160 [2024-11-26 20:47:00.234630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:80192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.160 [2024-11-26 20:47:00.234643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:19:25.160 [2024-11-26 20:47:00.234662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:80648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.160 [2024-11-26 20:47:00.234676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:19:25.160 [2024-11-26 20:47:00.234697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:80656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.160 [2024-11-26 20:47:00.234711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0010 p:0 m:0 
dnr:0 00:19:25.160 [2024-11-26 20:47:00.234730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:80664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.160 [2024-11-26 20:47:00.234749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:19:25.160 [2024-11-26 20:47:00.234768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:80672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.160 [2024-11-26 20:47:00.234782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:19:25.160 [2024-11-26 20:47:00.234801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:80680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.160 [2024-11-26 20:47:00.234814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:19:25.160 [2024-11-26 20:47:00.234833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:80688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.160 [2024-11-26 20:47:00.234848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:19:25.160 [2024-11-26 20:47:00.234868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:80696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.160 [2024-11-26 20:47:00.234881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:19:25.160 [2024-11-26 20:47:00.234900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:80704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.160 [2024-11-26 20:47:00.234914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:19:25.160 [2024-11-26 20:47:00.234933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:80200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.160 [2024-11-26 20:47:00.234947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:19:25.160 [2024-11-26 20:47:00.234966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:80208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.160 [2024-11-26 20:47:00.234979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:19:25.160 [2024-11-26 20:47:00.234998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:80216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.160 [2024-11-26 20:47:00.235011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:19:25.160 [2024-11-26 20:47:00.235031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:80224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.160 [2024-11-26 20:47:00.235044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:19:25.160 [2024-11-26 20:47:00.235063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:80232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.160 [2024-11-26 20:47:00.235077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:19:25.160 [2024-11-26 20:47:00.235096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:80240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.160 [2024-11-26 20:47:00.235109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:19:25.160 [2024-11-26 20:47:00.235129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:80248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.160 [2024-11-26 20:47:00.235142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:19:25.160 [2024-11-26 20:47:00.235805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:80256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.160 [2024-11-26 20:47:00.235833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:19:25.160 [2024-11-26 20:47:00.235863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:80712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.160 [2024-11-26 20:47:00.235878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:19:25.160 [2024-11-26 20:47:00.235908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:80720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.160 [2024-11-26 20:47:00.235923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:19:25.160 [2024-11-26 20:47:00.235951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:80728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.160 [2024-11-26 20:47:00.235966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:25.160 [2024-11-26 20:47:00.235993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:80736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.160 [2024-11-26 20:47:00.236008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:25.160 [2024-11-26 20:47:00.236035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:80744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.160 [2024-11-26 20:47:00.236050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:25.160 [2024-11-26 20:47:00.236078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:80752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.160 [2024-11-26 20:47:00.236096] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:19:25.160 [2024-11-26 20:47:00.236123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:80760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:19:25.160 [2024-11-26 20:47:00.236138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:19:25.160 [2024-11-26 20:47:00.236188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:80768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:19:25.160 [2024-11-26 20:47:00.236205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:19:25.160 9839.06 IOPS, 38.43 MiB/s [2024-11-26T20:47:20.153Z] 9260.29 IOPS, 36.17 MiB/s [2024-11-26T20:47:20.153Z] 8745.83 IOPS, 34.16 MiB/s [2024-11-26T20:47:20.153Z] 8285.53 IOPS, 32.37 MiB/s [2024-11-26T20:47:20.153Z] 8268.90 IOPS, 32.30 MiB/s [2024-11-26T20:47:20.153Z] 8318.48 IOPS, 32.49 MiB/s [2024-11-26T20:47:20.153Z] 8418.86 IOPS, 32.89 MiB/s [2024-11-26T20:47:20.153Z] 8668.04 IOPS, 33.86 MiB/s [2024-11-26T20:47:20.153Z] 8953.29 IOPS, 34.97 MiB/s [2024-11-26T20:47:20.153Z] 9212.08 IOPS, 35.98 MiB/s [2024-11-26T20:47:20.153Z] 9345.00 IOPS, 36.50 MiB/s [2024-11-26T20:47:20.154Z] 9424.67 IOPS, 36.82 MiB/s [2024-11-26T20:47:20.154Z] 9473.21 IOPS, 37.00 MiB/s [2024-11-26T20:47:20.154Z] 9508.28 IOPS, 37.14 MiB/s [2024-11-26T20:47:20.154Z] 9596.20 IOPS, 37.49 MiB/s [2024-11-26T20:47:20.154Z] 9709.81 IOPS, 37.93 MiB/s [2024-11-26T20:47:20.154Z] 9807.72 IOPS, 38.31 MiB/s [2024-11-26T20:47:20.154Z]
00:19:25.161 [2024-11-26 20:47:17.153554 .. 20:47:17.168711] nvme_qpair.c: repeated *NOTICE* pairs on qid:1: 243:nvme_io_qpair_print_command (READ and WRITE, sqid:1 nsid:1 len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000 for writes, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 for reads), each followed by 474:spdk_nvme_print_completion with ASYMMETRIC ACCESS INACCESSIBLE (03/02) cdw0:0 p:0 m:0 dnr:0
00:19:25.167 [2024-11-26 20:47:17.168711] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:25.167 [2024-11-26 20:47:17.168734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:27736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.167 [2024-11-26 20:47:17.168750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:25.167 [2024-11-26 20:47:17.169405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:27728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.167 [2024-11-26 20:47:17.169432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:25.167 [2024-11-26 20:47:17.169458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:28312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.167 [2024-11-26 20:47:17.169474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:25.167 [2024-11-26 20:47:17.169497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:28328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.167 [2024-11-26 20:47:17.169513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:19:25.167 [2024-11-26 20:47:17.169542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:28344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.167 [2024-11-26 20:47:17.169561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:19:25.167 [2024-11-26 20:47:17.169584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:28360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.167 [2024-11-26 20:47:17.169600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:25.167 [2024-11-26 20:47:17.169634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:27840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.167 [2024-11-26 20:47:17.169650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:25.168 [2024-11-26 20:47:17.169672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:27904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.168 [2024-11-26 20:47:17.169698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:25.168 [2024-11-26 20:47:17.169721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:28240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.168 [2024-11-26 20:47:17.169737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:25.168 [2024-11-26 20:47:17.169759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:19:25.168 [2024-11-26 20:47:17.169775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:25.168 [2024-11-26 20:47:17.169797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.168 [2024-11-26 20:47:17.169812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:19:25.168 [2024-11-26 20:47:17.169835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:27496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.168 [2024-11-26 20:47:17.169850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:19:25.168 [2024-11-26 20:47:17.169900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:28000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.168 [2024-11-26 20:47:17.169918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:19:25.168 [2024-11-26 20:47:17.169940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:27600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.168 [2024-11-26 20:47:17.169957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:19:25.168 [2024-11-26 20:47:17.169980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:27760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.168 [2024-11-26 20:47:17.169996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:19:25.168 [2024-11-26 20:47:17.170018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:27512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.168 [2024-11-26 20:47:17.170035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:19:25.168 [2024-11-26 20:47:17.170057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:28152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.168 [2024-11-26 20:47:17.170074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:19:25.168 [2024-11-26 20:47:17.170096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:28216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.168 [2024-11-26 20:47:17.170112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:19:25.168 [2024-11-26 20:47:17.170135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:27880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.168 [2024-11-26 20:47:17.170151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:19:25.168 [2024-11-26 20:47:17.170174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 
lba:27736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.168 [2024-11-26 20:47:17.170228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:19:25.168 [2024-11-26 20:47:17.170252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:27912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.168 [2024-11-26 20:47:17.170268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:19:25.168 [2024-11-26 20:47:17.170291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:27944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.168 [2024-11-26 20:47:17.170308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:19:25.168 [2024-11-26 20:47:17.171599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:28384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.168 [2024-11-26 20:47:17.171628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:19:25.168 [2024-11-26 20:47:17.171655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:27976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.168 [2024-11-26 20:47:17.171672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:19:25.168 [2024-11-26 20:47:17.171695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:28008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.168 [2024-11-26 20:47:17.171712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:19:25.168 [2024-11-26 20:47:17.171735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:27728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.168 [2024-11-26 20:47:17.171751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:19:25.168 [2024-11-26 20:47:17.171774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:28328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.168 [2024-11-26 20:47:17.171790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:19:25.168 [2024-11-26 20:47:17.171813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:28360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.168 [2024-11-26 20:47:17.171829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:25.168 [2024-11-26 20:47:17.171855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:27904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.168 [2024-11-26 20:47:17.171871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:25.168 [2024-11-26 20:47:17.171894] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:28272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.168 [2024-11-26 20:47:17.171910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:19:25.168 [2024-11-26 20:47:17.171933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:27496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.168 [2024-11-26 20:47:17.171949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:19:25.168 [2024-11-26 20:47:17.171972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:27600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.168 [2024-11-26 20:47:17.171988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:25.168 [2024-11-26 20:47:17.172023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:27512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.168 [2024-11-26 20:47:17.172039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:19:25.168 [2024-11-26 20:47:17.172062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.168 [2024-11-26 20:47:17.172078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:25.168 [2024-11-26 20:47:17.172100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:27736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.168 [2024-11-26 20:47:17.172117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:19:25.168 [2024-11-26 20:47:17.172140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:27944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.168 [2024-11-26 20:47:17.172167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:19:25.168 [2024-11-26 20:47:17.173812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:27432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.168 [2024-11-26 20:47:17.173842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:19:25.168 [2024-11-26 20:47:17.173880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:28040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.168 [2024-11-26 20:47:17.173898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:25.169 [2024-11-26 20:47:17.173937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:28408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.169 [2024-11-26 20:47:17.173953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:004c p:0 m:0 dnr:0 
00:19:25.169 [2024-11-26 20:47:17.173976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:28424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.169 [2024-11-26 20:47:17.173992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:19:25.169 [2024-11-26 20:47:17.174015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:28440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.169 [2024-11-26 20:47:17.174031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:19:25.169 [2024-11-26 20:47:17.174054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:28456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.169 [2024-11-26 20:47:17.174069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:19:25.169 [2024-11-26 20:47:17.174092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:28472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.169 [2024-11-26 20:47:17.174108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:19:25.169 [2024-11-26 20:47:17.174131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:28056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.169 [2024-11-26 20:47:17.174147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:19:25.169 [2024-11-26 20:47:17.174194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:28088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.169 [2024-11-26 20:47:17.174211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:19:25.169 [2024-11-26 20:47:17.174234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:28120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.169 [2024-11-26 20:47:17.174250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:19:25.169 [2024-11-26 20:47:17.174273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:27952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.169 [2024-11-26 20:47:17.174289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:19:25.169 [2024-11-26 20:47:17.174312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:27976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.169 [2024-11-26 20:47:17.174328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:19:25.169 [2024-11-26 20:47:17.174350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:27728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.169 [2024-11-26 20:47:17.174366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:19:25.169 [2024-11-26 20:47:17.174389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.169 [2024-11-26 20:47:17.174405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:19:25.169 [2024-11-26 20:47:17.174428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:28272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.169 [2024-11-26 20:47:17.174444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:19:25.169 [2024-11-26 20:47:17.174467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:27600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.169 [2024-11-26 20:47:17.174483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:19:25.169 [2024-11-26 20:47:17.174505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:28216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.169 [2024-11-26 20:47:17.174521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:19:25.169 [2024-11-26 20:47:17.174544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:27944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.169 [2024-11-26 20:47:17.174560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:19:25.169 [2024-11-26 20:47:17.174582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:28496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.169 [2024-11-26 20:47:17.174598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:19:25.169 [2024-11-26 20:47:17.174622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:28512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.169 [2024-11-26 20:47:17.174638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:19:25.169 [2024-11-26 20:47:17.174661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:28528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.169 [2024-11-26 20:47:17.174684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:19:25.169 [2024-11-26 20:47:17.174707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:27824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.169 [2024-11-26 20:47:17.174723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:19:25.169 [2024-11-26 20:47:17.174749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:28160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.169 [2024-11-26 20:47:17.174766] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:19:25.169 [2024-11-26 20:47:17.174788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:28192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.169 [2024-11-26 20:47:17.174804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:25.169 [2024-11-26 20:47:17.174827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:28224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.169 [2024-11-26 20:47:17.174847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:25.169 [2024-11-26 20:47:17.174889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:28544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.169 [2024-11-26 20:47:17.174908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:19:25.169 [2024-11-26 20:47:17.174933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:28560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.169 [2024-11-26 20:47:17.174952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:19:25.169 [2024-11-26 20:47:17.174977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:28576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.169 [2024-11-26 20:47:17.174995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:25.169 [2024-11-26 20:47:17.175020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:28592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.169 [2024-11-26 20:47:17.175055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:19:25.169 [2024-11-26 20:47:17.175081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:28608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.169 [2024-11-26 20:47:17.175100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:25.169 [2024-11-26 20:47:17.175126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:28624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.169 [2024-11-26 20:47:17.175145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:19:25.169 [2024-11-26 20:47:17.175171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:28048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.169 [2024-11-26 20:47:17.175200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:19:25.169 [2024-11-26 20:47:17.175226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:28112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:25.169 [2024-11-26 20:47:17.175253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:19:25.169 [2024-11-26 20:47:17.175279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:28264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.170 [2024-11-26 20:47:17.175319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:25.170 [2024-11-26 20:47:17.175343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:28296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.170 [2024-11-26 20:47:17.175359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:19:25.170 9816.36 IOPS, 38.35 MiB/s [2024-11-26T20:47:20.163Z] 9878.24 IOPS, 38.59 MiB/s [2024-11-26T20:47:20.163Z] Received shutdown signal, test time was about 34.775579 seconds 00:19:25.170 00:19:25.170 Latency(us) 00:19:25.170 [2024-11-26T20:47:20.163Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:25.170 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:25.170 Verification LBA range: start 0x0 length 0x4000 00:19:25.170 Nvme0n1 : 34.77 9921.52 38.76 0.00 0.00 12878.74 92.65 4026531.84 00:19:25.170 [2024-11-26T20:47:20.163Z] =================================================================================================================== 00:19:25.170 [2024-11-26T20:47:20.163Z] Total : 9921.52 38.76 0.00 0.00 12878.74 92.65 4026531.84 00:19:25.170 20:47:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:25.428 20:47:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:19:25.428 20:47:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:19:25.428 20:47:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:19:25.428 20:47:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:25.428 20:47:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:19:25.686 20:47:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:25.686 20:47:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:19:25.686 20:47:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:25.686 20:47:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:25.686 rmmod nvme_tcp 00:19:25.686 rmmod nvme_fabrics 00:19:25.686 rmmod nvme_keyring 00:19:25.686 20:47:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:25.686 20:47:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:19:25.686 20:47:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:19:25.686 20:47:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 76946 ']' 00:19:25.686 20:47:20 
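Editor's note: the trace above and just below is the multipath_status teardown. A condensed sketch of what those steps amount to, assuming the same paths and PID as this run (the real logic, with its error handling, lives in nvmftestfini/nvmfcleanup in test/nvmf/common.sh):

    # Delete the subsystem the test created, drop the scratch file, then unload the kernel initiator stack.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
    sync
    modprobe -v -r nvme-tcp      # the rmmod nvme_tcp / nvme_fabrics / nvme_keyring lines above are its output
    kill 76946 && wait 76946     # killprocess just below stops the nvmf target app started for this test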
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 76946 00:19:25.686 20:47:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 76946 ']' 00:19:25.686 20:47:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 76946 00:19:25.686 20:47:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:19:25.686 20:47:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:25.686 20:47:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76946 00:19:25.686 killing process with pid 76946 00:19:25.686 20:47:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:25.686 20:47:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:25.686 20:47:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76946' 00:19:25.686 20:47:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 76946 00:19:25.686 20:47:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 76946 00:19:25.965 20:47:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:25.966 20:47:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:25.966 20:47:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:25.966 20:47:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:19:25.966 20:47:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:25.966 20:47:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save 00:19:25.966 20:47:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore 00:19:25.966 20:47:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:25.966 20:47:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:19:25.966 20:47:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:19:25.966 20:47:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:19:25.966 20:47:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:19:25.966 20:47:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:19:25.966 20:47:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:19:25.966 20:47:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:19:25.966 20:47:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:19:25.966 20:47:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:19:25.966 20:47:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@241 -- # ip link delete 
nvmf_br type bridge 00:19:26.279 20:47:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:19:26.279 20:47:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:19:26.279 20:47:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:26.279 20:47:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:26.279 20:47:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@246 -- # remove_spdk_ns 00:19:26.279 20:47:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:26.279 20:47:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:26.279 20:47:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:26.279 20:47:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@300 -- # return 0 00:19:26.279 00:19:26.279 real 0m41.598s 00:19:26.279 user 2m9.766s 00:19:26.279 sys 0m15.655s 00:19:26.279 20:47:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:26.279 20:47:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:19:26.279 ************************************ 00:19:26.279 END TEST nvmf_host_multipath_status 00:19:26.279 ************************************ 00:19:26.279 20:47:21 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:19:26.279 20:47:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:26.279 20:47:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:26.279 20:47:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:19:26.279 ************************************ 00:19:26.279 START TEST nvmf_discovery_remove_ifc 00:19:26.279 ************************************ 00:19:26.279 20:47:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:19:26.279 * Looking for test storage... 
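Editor's note: the nvmf_veth_fini trace above boils down to the following network teardown. This is a sketch using the interface names of this harness; the final namespace removal inside remove_spdk_ns is an assumption, since the trace only shows the helper being invoked:

    # Detach the veth bridge ports and bring them down.
    for port in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$port" nomaster
        ip link set "$port" down
    done
    ip link delete nvmf_br type bridge
    ip link delete nvmf_init_if
    ip link delete nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
    ip netns del nvmf_tgt_ns_spdk    # assumed to be what remove_spdk_ns performs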
00:19:26.279 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:26.279 20:47:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:26.279 20:47:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:26.279 20:47:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lcov --version 00:19:26.537 20:47:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:26.537 20:47:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:26.537 20:47:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:26.537 20:47:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:26.537 20:47:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:19:26.537 20:47:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:19:26.537 20:47:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:19:26.537 20:47:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:19:26.537 20:47:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:19:26.537 20:47:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:19:26.537 20:47:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:19:26.537 20:47:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:26.537 20:47:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:19:26.537 20:47:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:19:26.537 20:47:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:26.537 20:47:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:26.537 20:47:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:19:26.537 20:47:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:19:26.537 20:47:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:26.537 20:47:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:19:26.537 20:47:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:19:26.537 20:47:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:19:26.537 20:47:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:19:26.537 20:47:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:26.537 20:47:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:19:26.537 20:47:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:19:26.537 20:47:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:26.537 20:47:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:26.537 20:47:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:19:26.537 20:47:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:26.537 20:47:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:26.537 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:26.537 --rc genhtml_branch_coverage=1 00:19:26.537 --rc genhtml_function_coverage=1 00:19:26.537 --rc genhtml_legend=1 00:19:26.537 --rc geninfo_all_blocks=1 00:19:26.537 --rc geninfo_unexecuted_blocks=1 00:19:26.537 00:19:26.537 ' 00:19:26.537 20:47:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:26.537 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:26.537 --rc genhtml_branch_coverage=1 00:19:26.537 --rc genhtml_function_coverage=1 00:19:26.537 --rc genhtml_legend=1 00:19:26.537 --rc geninfo_all_blocks=1 00:19:26.537 --rc geninfo_unexecuted_blocks=1 00:19:26.537 00:19:26.537 ' 00:19:26.537 20:47:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:26.537 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:26.537 --rc genhtml_branch_coverage=1 00:19:26.537 --rc genhtml_function_coverage=1 00:19:26.537 --rc genhtml_legend=1 00:19:26.537 --rc geninfo_all_blocks=1 00:19:26.537 --rc geninfo_unexecuted_blocks=1 00:19:26.537 00:19:26.537 ' 00:19:26.537 20:47:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:26.537 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:26.537 --rc genhtml_branch_coverage=1 00:19:26.537 --rc genhtml_function_coverage=1 00:19:26.537 --rc genhtml_legend=1 00:19:26.537 --rc geninfo_all_blocks=1 00:19:26.537 --rc geninfo_unexecuted_blocks=1 00:19:26.537 00:19:26.537 ' 00:19:26.537 20:47:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:26.537 20:47:21 
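Editor's note: the long xtrace block above is scripts/common.sh checking whether the installed lcov predates 2.x before choosing the old-style --rc coverage flags. A condensed illustration of that comparison, modeled on the helpers named in the trace (lt, cmp_versions) but not the verbatim scripts/common.sh source:

    # Return 0 when version $1 is strictly older than $2, comparing dotted fields numerically.
    lt() {
        local -a ver1 ver2
        local v
        IFS='.-:' read -ra ver1 <<< "$1"
        IFS='.-:' read -ra ver2 <<< "$2"
        for ((v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++)); do
            ((${ver1[v]:-0} > ${ver2[v]:-0})) && return 1
            ((${ver1[v]:-0} < ${ver2[v]:-0})) && return 0
        done
        return 1
    }
    # lcov 1.15 is older than 2, so the trace above sets the legacy options:
    lt 1.15 2 && lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'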
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:19:26.537 20:47:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:26.537 20:47:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:26.537 20:47:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:26.537 20:47:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:26.537 20:47:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:26.537 20:47:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:26.537 20:47:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:26.537 20:47:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:26.537 20:47:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:26.537 20:47:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:26.537 20:47:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b 00:19:26.537 20:47:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b7a0101-ee75-44bd-b64f-b6a56d193f2b 00:19:26.537 20:47:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:26.537 20:47:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:26.537 20:47:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:26.537 20:47:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:26.537 20:47:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:26.537 20:47:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:19:26.537 20:47:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:26.537 20:47:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:26.537 20:47:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:26.537 20:47:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:26.537 20:47:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:26.537 20:47:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:26.537 20:47:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:19:26.537 20:47:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:26.537 20:47:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:19:26.537 20:47:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:26.537 20:47:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:26.537 20:47:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:26.538 20:47:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:26.538 20:47:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:26.538 20:47:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:26.538 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:26.538 20:47:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:26.538 20:47:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:26.538 20:47:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:26.538 20:47:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:19:26.538 20:47:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 
-- # discovery_port=8009 00:19:26.538 20:47:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:19:26.538 20:47:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:19:26.538 20:47:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:19:26.538 20:47:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:19:26.538 20:47:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:19:26.538 20:47:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:26.538 20:47:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:26.538 20:47:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:26.538 20:47:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:26.538 20:47:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:26.538 20:47:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:26.538 20:47:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:26.538 20:47:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:26.538 20:47:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:19:26.538 20:47:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:19:26.538 20:47:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:19:26.538 20:47:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:19:26.538 20:47:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:19:26.538 20:47:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@460 -- # nvmf_veth_init 00:19:26.538 20:47:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:26.538 20:47:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:19:26.538 20:47:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:19:26.538 20:47:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:19:26.538 20:47:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:26.538 20:47:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:19:26.538 20:47:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:26.538 20:47:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:19:26.538 20:47:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:26.538 20:47:21 
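Editor's note: the values exported here (discovery on port 8009, the host NQN generated by nvme gen-hostnqn above, and the 10.0.0.3 target address that nvmf_veth_init assigns just below) are exercised later through the test's own helpers. As a rough manual equivalent, hedged because the script never runs this exact command, querying that discovery service would look like:

    # Hypothetical manual check of the discovery endpoint this test configures.
    nvme discover -t tcp -a 10.0.0.3 -s 8009 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b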
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:19:26.538 20:47:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:26.538 20:47:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:26.538 20:47:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:26.538 20:47:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:26.538 20:47:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:26.538 20:47:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:26.538 20:47:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:19:26.538 Cannot find device "nvmf_init_br" 00:19:26.538 20:47:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # true 00:19:26.538 20:47:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:19:26.538 Cannot find device "nvmf_init_br2" 00:19:26.538 20:47:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # true 00:19:26.538 20:47:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:19:26.538 Cannot find device "nvmf_tgt_br" 00:19:26.538 20:47:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@164 -- # true 00:19:26.538 20:47:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:19:26.538 Cannot find device "nvmf_tgt_br2" 00:19:26.538 20:47:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@165 -- # true 00:19:26.538 20:47:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:19:26.538 Cannot find device "nvmf_init_br" 00:19:26.538 20:47:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # true 00:19:26.538 20:47:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:19:26.538 Cannot find device "nvmf_init_br2" 00:19:26.538 20:47:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@167 -- # true 00:19:26.538 20:47:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:19:26.538 Cannot find device "nvmf_tgt_br" 00:19:26.538 20:47:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@168 -- # true 00:19:26.538 20:47:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:19:26.538 Cannot find device "nvmf_tgt_br2" 00:19:26.538 20:47:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # true 00:19:26.538 20:47:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:19:26.538 Cannot find device "nvmf_br" 00:19:26.797 20:47:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # true 00:19:26.797 20:47:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:19:26.797 Cannot find device "nvmf_init_if" 00:19:26.797 20:47:21 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # true 00:19:26.797 20:47:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:19:26.797 Cannot find device "nvmf_init_if2" 00:19:26.797 20:47:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@172 -- # true 00:19:26.797 20:47:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:26.797 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:26.797 20:47:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@173 -- # true 00:19:26.797 20:47:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:26.797 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:26.797 20:47:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # true 00:19:26.797 20:47:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:19:26.797 20:47:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:26.797 20:47:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:19:26.797 20:47:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:26.797 20:47:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:26.797 20:47:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:26.797 20:47:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:26.797 20:47:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:26.797 20:47:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:19:26.797 20:47:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:19:26.797 20:47:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:19:26.797 20:47:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:19:26.797 20:47:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:19:26.797 20:47:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:19:26.797 20:47:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:19:26.797 20:47:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:19:26.797 20:47:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:19:26.797 20:47:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:26.797 20:47:21 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:26.797 20:47:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:26.797 20:47:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:19:26.797 20:47:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:19:26.797 20:47:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:19:26.797 20:47:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:19:26.797 20:47:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:26.797 20:47:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:26.797 20:47:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:26.797 20:47:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:19:26.797 20:47:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:19:26.797 20:47:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:19:26.797 20:47:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:26.797 20:47:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:19:26.797 20:47:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:19:27.057 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:27.057 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.114 ms 00:19:27.057 00:19:27.057 --- 10.0.0.3 ping statistics --- 00:19:27.057 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:27.057 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:19:27.057 20:47:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:19:27.057 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:19:27.057 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.078 ms 00:19:27.057 00:19:27.057 --- 10.0.0.4 ping statistics --- 00:19:27.057 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:27.057 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:19:27.057 20:47:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:27.057 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:27.057 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:19:27.057 00:19:27.057 --- 10.0.0.1 ping statistics --- 00:19:27.057 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:27.057 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:19:27.057 20:47:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:19:27.057 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:27.057 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.050 ms 00:19:27.057 00:19:27.057 --- 10.0.0.2 ping statistics --- 00:19:27.057 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:27.057 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:19:27.057 20:47:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:27.057 20:47:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@461 -- # return 0 00:19:27.057 20:47:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:27.057 20:47:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:27.057 20:47:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:27.057 20:47:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:27.057 20:47:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:27.057 20:47:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:27.057 20:47:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:27.057 20:47:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:19:27.057 20:47:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:27.057 20:47:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:27.057 20:47:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:27.057 20:47:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=77869 00:19:27.057 20:47:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 77869 00:19:27.057 20:47:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 77869 ']' 00:19:27.057 20:47:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:27.057 20:47:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:27.057 20:47:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:27.057 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:19:27.057 20:47:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:27.057 20:47:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:27.057 20:47:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:27.057 [2024-11-26 20:47:21.906572] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:19:27.057 [2024-11-26 20:47:21.907282] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:27.316 [2024-11-26 20:47:22.057791] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:27.316 [2024-11-26 20:47:22.125531] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:27.316 [2024-11-26 20:47:22.125590] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:27.316 [2024-11-26 20:47:22.125602] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:27.316 [2024-11-26 20:47:22.125612] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:27.316 [2024-11-26 20:47:22.125621] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:27.316 [2024-11-26 20:47:22.126035] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:27.316 [2024-11-26 20:47:22.208612] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:28.249 20:47:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:28.250 20:47:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:19:28.250 20:47:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:28.250 20:47:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:28.250 20:47:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:28.250 20:47:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:28.250 20:47:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:19:28.250 20:47:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.250 20:47:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:28.250 [2024-11-26 20:47:23.009967] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:28.250 [2024-11-26 20:47:23.018149] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:19:28.250 null0 00:19:28.250 [2024-11-26 20:47:23.050028] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:28.250 20:47:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.250 20:47:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@59 -- # hostpid=77901 00:19:28.250 20:47:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:19:28.250 20:47:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 77901 /tmp/host.sock 00:19:28.250 20:47:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 77901 ']' 00:19:28.250 20:47:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:19:28.250 20:47:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:28.250 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:19:28.250 20:47:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:19:28.250 20:47:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:28.250 20:47:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:28.250 [2024-11-26 20:47:23.134632] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:19:28.250 [2024-11-26 20:47:23.134729] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77901 ] 00:19:28.507 [2024-11-26 20:47:23.296692] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:28.507 [2024-11-26 20:47:23.360683] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:29.443 20:47:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:29.443 20:47:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:19:29.443 20:47:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:29.443 20:47:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:19:29.443 20:47:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.443 20:47:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:29.443 20:47:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.443 20:47:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:19:29.443 20:47:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.443 20:47:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:29.443 [2024-11-26 20:47:24.213675] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:29.443 20:47:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.443 20:47:24 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:19:29.443 20:47:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.443 20:47:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:30.377 [2024-11-26 20:47:25.282875] bdev_nvme.c:7484:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:19:30.377 [2024-11-26 20:47:25.282911] bdev_nvme.c:7570:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:19:30.377 [2024-11-26 20:47:25.282944] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:19:30.377 [2024-11-26 20:47:25.288920] bdev_nvme.c:7413:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme0 00:19:30.377 [2024-11-26 20:47:25.343285] bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.3:4420 00:19:30.377 [2024-11-26 20:47:25.344291] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x220b000:1 started. 00:19:30.377 [2024-11-26 20:47:25.345990] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:19:30.377 [2024-11-26 20:47:25.346058] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:19:30.377 [2024-11-26 20:47:25.346080] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:19:30.377 [2024-11-26 20:47:25.346094] bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:19:30.377 [2024-11-26 20:47:25.346117] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:19:30.377 20:47:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.377 20:47:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:19:30.377 20:47:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:30.377 [2024-11-26 20:47:25.351532] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x220b000 was disconnected and freed. delete nvme_qpair. 
00:19:30.377 20:47:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:30.377 20:47:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.377 20:47:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:30.377 20:47:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:30.377 20:47:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:30.377 20:47:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:30.636 20:47:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.636 20:47:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:19:30.636 20:47:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.3/24 dev nvmf_tgt_if 00:19:30.636 20:47:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:19:30.636 20:47:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:19:30.636 20:47:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:30.636 20:47:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:30.636 20:47:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:30.636 20:47:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.636 20:47:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:30.636 20:47:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:30.636 20:47:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:30.636 20:47:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.636 20:47:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:19:30.636 20:47:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:31.572 20:47:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:31.572 20:47:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:31.572 20:47:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:31.572 20:47:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.572 20:47:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:31.572 20:47:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:31.572 20:47:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:31.572 20:47:26 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.572 20:47:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:19:31.572 20:47:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:32.946 20:47:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:32.946 20:47:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:32.946 20:47:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:32.946 20:47:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:32.946 20:47:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:32.946 20:47:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.946 20:47:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:32.946 20:47:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.946 20:47:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:19:32.946 20:47:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:33.893 20:47:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:33.893 20:47:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:33.893 20:47:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:33.893 20:47:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:33.893 20:47:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:33.893 20:47:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.893 20:47:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:33.893 20:47:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.893 20:47:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:19:33.893 20:47:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:34.827 20:47:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:34.827 20:47:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:34.827 20:47:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:34.827 20:47:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:34.827 20:47:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.827 20:47:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:34.827 20:47:29 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:34.827 20:47:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.827 20:47:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:19:34.827 20:47:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:35.762 20:47:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:35.762 20:47:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:35.762 20:47:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:35.762 20:47:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:35.762 20:47:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:35.762 20:47:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.762 20:47:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:35.762 20:47:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.762 20:47:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:19:35.762 20:47:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:36.021 [2024-11-26 20:47:30.773770] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:19:36.021 [2024-11-26 20:47:30.773827] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:36.021 [2024-11-26 20:47:30.773840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.021 [2024-11-26 20:47:30.773855] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:36.021 [2024-11-26 20:47:30.773865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.021 [2024-11-26 20:47:30.773875] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:36.021 [2024-11-26 20:47:30.773884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.021 [2024-11-26 20:47:30.773894] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:36.021 [2024-11-26 20:47:30.773902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.021 [2024-11-26 20:47:30.773912] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:19:36.021 [2024-11-26 20:47:30.773921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:19:36.021 [2024-11-26 20:47:30.773930] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21e7250 is same with the state(6) to be set 00:19:36.021 [2024-11-26 20:47:30.783764] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21e7250 (9): Bad file descriptor 00:19:36.021 [2024-11-26 20:47:30.793783] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:19:36.021 [2024-11-26 20:47:30.793799] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:19:36.021 [2024-11-26 20:47:30.793805] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:19:36.021 [2024-11-26 20:47:30.793811] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:19:36.021 [2024-11-26 20:47:30.793842] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:19:36.958 20:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:36.958 20:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:36.958 20:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.958 20:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:36.958 20:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:36.958 20:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:36.958 20:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:36.958 [2024-11-26 20:47:31.849223] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 110 00:19:36.958 [2024-11-26 20:47:31.849338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21e7250 with addr=10.0.0.3, port=4420 00:19:36.958 [2024-11-26 20:47:31.849381] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21e7250 is same with the state(6) to be set 00:19:36.958 [2024-11-26 20:47:31.849456] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21e7250 (9): Bad file descriptor 00:19:36.958 [2024-11-26 20:47:31.850411] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 00:19:36.958 [2024-11-26 20:47:31.850505] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:19:36.958 [2024-11-26 20:47:31.850534] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:19:36.958 [2024-11-26 20:47:31.850563] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:19:36.958 [2024-11-26 20:47:31.850588] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:19:36.958 [2024-11-26 20:47:31.850606] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 
00:19:36.958 [2024-11-26 20:47:31.850622] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:19:36.958 [2024-11-26 20:47:31.850652] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:19:36.958 [2024-11-26 20:47:31.850669] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:19:36.958 20:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.958 20:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:19:36.958 20:47:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:37.895 [2024-11-26 20:47:32.850763] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:19:37.895 [2024-11-26 20:47:32.851028] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:19:37.895 [2024-11-26 20:47:32.851067] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:19:37.895 [2024-11-26 20:47:32.851077] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:19:37.895 [2024-11-26 20:47:32.851091] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:19:37.895 [2024-11-26 20:47:32.851102] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:19:37.895 [2024-11-26 20:47:32.851109] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:19:37.895 [2024-11-26 20:47:32.851115] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:19:37.895 [2024-11-26 20:47:32.851151] bdev_nvme.c:7235:remove_discovery_entry: *INFO*: Discovery[10.0.0.3:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 00:19:37.895 [2024-11-26 20:47:32.851219] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:37.895 [2024-11-26 20:47:32.851233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.896 [2024-11-26 20:47:32.851248] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:37.896 [2024-11-26 20:47:32.851258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.896 [2024-11-26 20:47:32.851268] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:37.896 [2024-11-26 20:47:32.851277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.896 [2024-11-26 20:47:32.851287] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:37.896 [2024-11-26 20:47:32.851303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.896 [2024-11-26 20:47:32.851313] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:19:37.896 [2024-11-26 20:47:32.851322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.896 [2024-11-26 20:47:32.851347] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 
00:19:37.896 [2024-11-26 20:47:32.851390] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2172a20 (9): Bad file descriptor 00:19:37.896 [2024-11-26 20:47:32.852376] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:19:37.896 [2024-11-26 20:47:32.852389] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:19:38.155 20:47:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:38.155 20:47:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:38.155 20:47:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:38.155 20:47:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.155 20:47:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:38.155 20:47:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:38.155 20:47:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:38.155 20:47:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.155 20:47:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:19:38.155 20:47:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:19:38.155 20:47:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:38.155 20:47:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:19:38.155 20:47:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:38.155 20:47:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:38.155 20:47:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:38.155 20:47:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.155 20:47:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:38.155 20:47:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:38.155 20:47:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:38.155 20:47:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.155 20:47:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:19:38.155 20:47:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:39.091 20:47:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:39.091 20:47:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:39.091 20:47:34 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:39.091 20:47:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.091 20:47:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:39.091 20:47:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:39.091 20:47:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:39.091 20:47:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.091 20:47:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:19:39.091 20:47:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:40.024 [2024-11-26 20:47:34.857339] bdev_nvme.c:7484:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:19:40.024 [2024-11-26 20:47:34.857375] bdev_nvme.c:7570:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:19:40.024 [2024-11-26 20:47:34.857389] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:19:40.024 [2024-11-26 20:47:34.863374] bdev_nvme.c:7413:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme1 00:19:40.024 [2024-11-26 20:47:34.917725] bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.3:4420 00:19:40.024 [2024-11-26 20:47:34.918599] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x21f2d80:1 started. 00:19:40.024 [2024-11-26 20:47:34.919835] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:19:40.024 [2024-11-26 20:47:34.919877] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:19:40.024 [2024-11-26 20:47:34.919898] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:19:40.024 [2024-11-26 20:47:34.919915] bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme1 done 00:19:40.024 [2024-11-26 20:47:34.919924] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:19:40.024 [2024-11-26 20:47:34.926048] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x21f2d80 was disconnected and freed. delete nvme_qpair. 
00:19:40.283 20:47:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:40.283 20:47:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:40.283 20:47:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:40.283 20:47:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:40.283 20:47:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.283 20:47:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:40.283 20:47:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:40.283 20:47:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.283 20:47:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:19:40.283 20:47:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:19:40.283 20:47:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 77901 00:19:40.283 20:47:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 77901 ']' 00:19:40.283 20:47:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 77901 00:19:40.283 20:47:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:19:40.283 20:47:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:40.283 20:47:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77901 00:19:40.283 killing process with pid 77901 00:19:40.283 20:47:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:40.283 20:47:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:40.283 20:47:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77901' 00:19:40.283 20:47:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 77901 00:19:40.283 20:47:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 77901 00:19:40.542 20:47:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:19:40.542 20:47:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:40.542 20:47:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:19:40.542 20:47:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:40.542 20:47:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:19:40.542 20:47:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:40.542 20:47:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:40.542 rmmod nvme_tcp 00:19:40.542 rmmod nvme_fabrics 00:19:40.542 rmmod nvme_keyring 00:19:40.542 20:47:35 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:40.542 20:47:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:19:40.542 20:47:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:19:40.542 20:47:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 77869 ']' 00:19:40.542 20:47:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 77869 00:19:40.542 20:47:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 77869 ']' 00:19:40.542 20:47:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 77869 00:19:40.542 20:47:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:19:40.542 20:47:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:40.542 20:47:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77869 00:19:40.542 killing process with pid 77869 00:19:40.542 20:47:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:40.542 20:47:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:40.542 20:47:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77869' 00:19:40.542 20:47:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 77869 00:19:40.542 20:47:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 77869 00:19:40.801 20:47:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:40.801 20:47:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:40.801 20:47:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:40.801 20:47:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:19:40.801 20:47:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:40.801 20:47:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:19:40.801 20:47:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:19:40.801 20:47:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:40.801 20:47:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:19:40.801 20:47:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:19:40.801 20:47:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:19:41.060 20:47:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:19:41.060 20:47:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:19:41.060 20:47:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:19:41.060 20:47:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:19:41.060 20:47:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:19:41.060 20:47:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:19:41.060 20:47:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:19:41.060 20:47:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:19:41.060 20:47:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:19:41.060 20:47:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:41.060 20:47:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:41.060 20:47:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@246 -- # remove_spdk_ns 00:19:41.060 20:47:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:41.060 20:47:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:41.060 20:47:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:41.060 20:47:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@300 -- # return 0 00:19:41.060 00:19:41.060 real 0m14.867s 00:19:41.060 user 0m24.410s 00:19:41.060 sys 0m3.443s 00:19:41.060 20:47:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:41.060 ************************************ 00:19:41.060 END TEST nvmf_discovery_remove_ifc 00:19:41.060 ************************************ 00:19:41.060 20:47:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:41.320 20:47:36 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:19:41.320 20:47:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:41.320 20:47:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:41.320 20:47:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:19:41.320 ************************************ 00:19:41.320 START TEST nvmf_identify_kernel_target 00:19:41.320 ************************************ 00:19:41.320 20:47:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:19:41.320 * Looking for test storage... 
00:19:41.320 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:41.320 20:47:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:41.320 20:47:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lcov --version 00:19:41.320 20:47:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:41.320 20:47:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:41.320 20:47:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:41.320 20:47:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:41.320 20:47:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:41.320 20:47:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:19:41.320 20:47:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:19:41.320 20:47:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:19:41.320 20:47:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:19:41.320 20:47:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:19:41.320 20:47:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:19:41.320 20:47:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:19:41.320 20:47:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:41.320 20:47:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:19:41.320 20:47:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:19:41.320 20:47:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:41.320 20:47:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:41.320 20:47:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:19:41.320 20:47:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:19:41.320 20:47:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:41.320 20:47:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:19:41.320 20:47:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:19:41.320 20:47:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:19:41.320 20:47:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:19:41.320 20:47:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:41.320 20:47:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:19:41.320 20:47:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:19:41.320 20:47:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:41.320 20:47:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:41.320 20:47:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:19:41.320 20:47:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:41.320 20:47:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:41.320 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:41.320 --rc genhtml_branch_coverage=1 00:19:41.320 --rc genhtml_function_coverage=1 00:19:41.320 --rc genhtml_legend=1 00:19:41.320 --rc geninfo_all_blocks=1 00:19:41.320 --rc geninfo_unexecuted_blocks=1 00:19:41.320 00:19:41.320 ' 00:19:41.320 20:47:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:41.320 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:41.320 --rc genhtml_branch_coverage=1 00:19:41.320 --rc genhtml_function_coverage=1 00:19:41.320 --rc genhtml_legend=1 00:19:41.320 --rc geninfo_all_blocks=1 00:19:41.320 --rc geninfo_unexecuted_blocks=1 00:19:41.320 00:19:41.320 ' 00:19:41.320 20:47:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:41.320 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:41.320 --rc genhtml_branch_coverage=1 00:19:41.320 --rc genhtml_function_coverage=1 00:19:41.320 --rc genhtml_legend=1 00:19:41.320 --rc geninfo_all_blocks=1 00:19:41.320 --rc geninfo_unexecuted_blocks=1 00:19:41.320 00:19:41.320 ' 00:19:41.320 20:47:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:41.320 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:41.320 --rc genhtml_branch_coverage=1 00:19:41.320 --rc genhtml_function_coverage=1 00:19:41.320 --rc genhtml_legend=1 00:19:41.320 --rc geninfo_all_blocks=1 00:19:41.320 --rc geninfo_unexecuted_blocks=1 00:19:41.320 00:19:41.320 ' 00:19:41.320 20:47:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
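The test then checks which lcov is installed so it can pick the right coverage options: lt 1.15 2 hands both version strings to cmp_versions, which splits them on '.', '-' and ':' and compares the pieces numerically, padding the shorter version with zeros. A minimal sketch of that element-wise comparison (the function name and the usage line are illustrative; the awk pipeline for extracting the version is the one the trace uses):

    # true (exit 0) when version $1 sorts strictly below version $2
    version_lt() {
        local IFS='.-:'
        local -a a=($1) b=($2)
        local v n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for ((v = 0; v < n; v++)); do
            (( ${a[v]:-0} > ${b[v]:-0} )) && return 1
            (( ${a[v]:-0} < ${b[v]:-0} )) && return 0
        done
        return 1    # equal versions are not "less than"
    }
    version_lt "$(lcov --version | awk '{print $NF}')" 2 && echo "use the pre-2.0 lcov options"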
00:19:41.320 20:47:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:19:41.320 20:47:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:41.320 20:47:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:41.320 20:47:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:41.320 20:47:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:41.320 20:47:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:41.320 20:47:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:41.321 20:47:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:41.321 20:47:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:41.321 20:47:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:41.321 20:47:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:41.321 20:47:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b 00:19:41.321 20:47:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b7a0101-ee75-44bd-b64f-b6a56d193f2b 00:19:41.321 20:47:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:41.321 20:47:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:41.321 20:47:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:41.321 20:47:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:41.321 20:47:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:41.321 20:47:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:19:41.321 20:47:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:41.321 20:47:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:41.321 20:47:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:41.321 20:47:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:41.321 20:47:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:41.321 20:47:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:41.321 20:47:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:19:41.321 20:47:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:41.321 20:47:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:19:41.321 20:47:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:41.321 20:47:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:41.321 20:47:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:41.321 20:47:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:41.321 20:47:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:41.321 20:47:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:41.321 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:41.321 20:47:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:41.321 20:47:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:41.321 20:47:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:41.321 20:47:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:19:41.321 20:47:36 
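The only stderr noise in this setup phase is common.sh line 33 reporting "[: : integer expression expected". The trace line right before it shows why: '[' '' -eq 1 ']' is a numeric test against a variable that is empty in this job's environment. Since the expression is used as a condition inside build_nvmf_app_args, the failure simply selects the false branch and the run carries on. A hedged sketch of the usual way to silence such noise (the variable and the extra argument below are placeholders, not what common.sh actually reads):

    # default the flag to 0 so the numeric test always sees an integer
    SOME_FEATURE_FLAG=${SOME_FEATURE_FLAG:-0}
    if [ "$SOME_FEATURE_FLAG" -eq 1 ]; then
        NVMF_APP+=(--some-extra-arg)   # illustrative only
    fi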
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:41.321 20:47:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:41.321 20:47:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:41.321 20:47:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:41.321 20:47:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:41.321 20:47:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:41.321 20:47:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:41.321 20:47:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:41.321 20:47:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:19:41.321 20:47:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:19:41.321 20:47:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:19:41.321 20:47:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:19:41.321 20:47:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:19:41.321 20:47:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:19:41.321 20:47:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:41.321 20:47:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:19:41.321 20:47:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:19:41.321 20:47:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:19:41.321 20:47:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:41.321 20:47:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:19:41.321 20:47:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:41.321 20:47:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:19:41.321 20:47:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:41.321 20:47:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:19:41.321 20:47:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:41.321 20:47:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:41.321 20:47:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:41.321 20:47:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:41.321 20:47:36 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:41.321 20:47:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:41.321 20:47:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:19:41.580 Cannot find device "nvmf_init_br" 00:19:41.580 20:47:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # true 00:19:41.580 20:47:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:19:41.580 Cannot find device "nvmf_init_br2" 00:19:41.580 20:47:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # true 00:19:41.580 20:47:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:19:41.580 Cannot find device "nvmf_tgt_br" 00:19:41.580 20:47:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@164 -- # true 00:19:41.580 20:47:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:19:41.580 Cannot find device "nvmf_tgt_br2" 00:19:41.580 20:47:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@165 -- # true 00:19:41.580 20:47:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:19:41.580 Cannot find device "nvmf_init_br" 00:19:41.580 20:47:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # true 00:19:41.580 20:47:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:19:41.580 Cannot find device "nvmf_init_br2" 00:19:41.580 20:47:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@167 -- # true 00:19:41.580 20:47:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:19:41.580 Cannot find device "nvmf_tgt_br" 00:19:41.580 20:47:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@168 -- # true 00:19:41.580 20:47:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:19:41.580 Cannot find device "nvmf_tgt_br2" 00:19:41.580 20:47:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # true 00:19:41.580 20:47:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:19:41.580 Cannot find device "nvmf_br" 00:19:41.580 20:47:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # true 00:19:41.580 20:47:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:19:41.580 Cannot find device "nvmf_init_if" 00:19:41.580 20:47:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # true 00:19:41.580 20:47:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:19:41.580 Cannot find device "nvmf_init_if2" 00:19:41.580 20:47:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@172 -- # true 00:19:41.580 20:47:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:41.580 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:41.580 20:47:36 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@173 -- # true 00:19:41.580 20:47:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:41.580 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:41.580 20:47:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # true 00:19:41.580 20:47:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:19:41.580 20:47:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:41.580 20:47:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:19:41.580 20:47:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:41.580 20:47:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:41.580 20:47:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:41.580 20:47:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:41.580 20:47:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:41.580 20:47:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:19:41.580 20:47:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:19:41.580 20:47:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:19:41.580 20:47:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:19:41.580 20:47:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:19:41.580 20:47:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:19:41.580 20:47:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:19:41.581 20:47:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:19:41.581 20:47:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:19:41.581 20:47:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:41.581 20:47:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:41.839 20:47:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:41.839 20:47:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:19:41.839 20:47:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:19:41.839 20:47:36 
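nvmftestinit has now reached nvmf_veth_init: the long run of "Cannot find device" and "Cannot open network namespace" messages is the pre-clean pass removing leftovers from any earlier run (each failure is tolerated, the trace shows a bare true after every one), after which the test network is built from scratch. The build, condensed (names and addresses exactly as traced):

    ip netns add nvmf_tgt_ns_spdk
    # initiator-side veth pairs stay in the default namespace
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    # target-side pairs: the *_if ends move into the SPDK target namespace
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
    for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" up
    done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if  up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up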
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:19:41.839 20:47:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:19:41.839 20:47:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:41.839 20:47:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:41.840 20:47:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:41.840 20:47:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:19:41.840 20:47:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:19:41.840 20:47:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:19:41.840 20:47:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:41.840 20:47:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:19:41.840 20:47:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:19:41.840 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:41.840 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.058 ms 00:19:41.840 00:19:41.840 --- 10.0.0.3 ping statistics --- 00:19:41.840 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:41.840 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:19:41.840 20:47:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:19:41.840 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:19:41.840 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.048 ms 00:19:41.840 00:19:41.840 --- 10.0.0.4 ping statistics --- 00:19:41.840 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:41.840 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:19:41.840 20:47:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:41.840 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:41.840 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:19:41.840 00:19:41.840 --- 10.0.0.1 ping statistics --- 00:19:41.840 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:41.840 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:19:41.840 20:47:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:19:41.840 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
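After enslaving the four *_br peers to nvmf_br, the script opens TCP/4420 on both initiator interfaces and allows bridged forwarding, then proves connectivity with one ping per address from both sides of the namespace boundary. Every rule added through the ipts wrapper is tagged with an iptables comment starting with SPDK_NVMF:, which is what lets the iptr helper (seen at the end of the previous test) restore the ruleset with those rules filtered out. A sketch of that tag-and-clean pattern, reconstructed from the commands the trace shows under ipts and iptr:

    ipts() {   # add a rule, tagging it with its own arguments
        iptables "$@" -m comment --comment "SPDK_NVMF:$*"
    }
    iptr() {   # reload the ruleset minus every SPDK_NVMF-tagged rule
        iptables-save | grep -v SPDK_NVMF | iptables-restore
    }
    ipts -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
    ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
    ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT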
00:19:41.840 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms 00:19:41.840 00:19:41.840 --- 10.0.0.2 ping statistics --- 00:19:41.840 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:41.840 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:19:41.840 20:47:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:41.840 20:47:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@461 -- # return 0 00:19:41.840 20:47:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:41.840 20:47:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:41.840 20:47:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:41.840 20:47:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:41.840 20:47:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:41.840 20:47:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:41.840 20:47:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:41.840 20:47:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:19:41.840 20:47:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:19:41.840 20:47:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:19:41.840 20:47:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:41.840 20:47:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:41.840 20:47:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:41.840 20:47:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:41.840 20:47:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:41.840 20:47:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:41.840 20:47:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:41.840 20:47:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:41.840 20:47:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:41.840 20:47:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:19:41.840 20:47:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:19:41.840 20:47:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:19:41.840 20:47:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:19:41.840 20:47:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:19:41.840 20:47:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:19:41.840 20:47:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:19:41.840 20:47:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:19:41.840 20:47:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:19:41.840 20:47:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:19:41.840 20:47:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:19:41.840 20:47:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:19:42.097 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:42.355 Waiting for block devices as requested 00:19:42.355 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:19:42.355 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:19:42.613 20:47:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:19:42.613 20:47:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:19:42.613 20:47:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:19:42.613 20:47:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:19:42.613 20:47:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:19:42.613 20:47:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:19:42.613 20:47:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:19:42.613 20:47:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:19:42.613 20:47:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:19:42.613 No valid GPT data, bailing 00:19:42.613 20:47:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:19:42.613 20:47:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:19:42.613 20:47:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:19:42.613 20:47:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:19:42.613 20:47:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:19:42.613 20:47:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:19:42.613 20:47:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:19:42.613 20:47:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:19:42.613 20:47:37 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:19:42.613 20:47:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:19:42.613 20:47:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:19:42.613 20:47:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:19:42.613 20:47:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:19:42.613 No valid GPT data, bailing 00:19:42.613 20:47:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:19:42.613 20:47:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:19:42.613 20:47:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:19:42.613 20:47:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:19:42.613 20:47:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:19:42.613 20:47:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:19:42.613 20:47:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:19:42.613 20:47:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:19:42.613 20:47:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:19:42.613 20:47:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:19:42.613 20:47:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:19:42.613 20:47:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:19:42.613 20:47:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:19:42.870 No valid GPT data, bailing 00:19:42.870 20:47:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:19:42.870 20:47:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:19:42.870 20:47:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:19:42.870 20:47:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:19:42.870 20:47:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:19:42.870 20:47:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:19:42.870 20:47:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:19:42.870 20:47:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:19:42.870 20:47:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:19:42.870 20:47:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
common/autotest_common.sh@1653 -- # [[ none != none ]] 00:19:42.870 20:47:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:19:42.870 20:47:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:19:42.870 20:47:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:19:42.870 No valid GPT data, bailing 00:19:42.870 20:47:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:19:42.870 20:47:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:19:42.870 20:47:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:19:42.870 20:47:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:19:42.870 20:47:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme1n1 ]] 00:19:42.870 20:47:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:19:42.870 20:47:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:19:42.870 20:47:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:19:42.870 20:47:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:19:42.870 20:47:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:19:42.870 20:47:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:19:42.870 20:47:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:19:42.870 20:47:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:19:42.870 20:47:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:19:42.870 20:47:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:19:42.870 20:47:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:19:42.870 20:47:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:19:42.870 20:47:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b --hostid=5b7a0101-ee75-44bd-b64f-b6a56d193f2b -a 10.0.0.1 -t tcp -s 4420 00:19:42.870 00:19:42.870 Discovery Log Number of Records 2, Generation counter 2 00:19:42.870 =====Discovery Log Entry 0====== 00:19:42.870 trtype: tcp 00:19:42.870 adrfam: ipv4 00:19:42.870 subtype: current discovery subsystem 00:19:42.870 treq: not specified, sq flow control disable supported 00:19:42.870 portid: 1 00:19:42.870 trsvcid: 4420 00:19:42.870 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:19:42.870 traddr: 10.0.0.1 00:19:42.870 eflags: none 00:19:42.870 sectype: none 00:19:42.870 =====Discovery Log Entry 1====== 00:19:42.870 trtype: tcp 00:19:42.870 adrfam: ipv4 00:19:42.870 subtype: nvme subsystem 00:19:42.870 treq: not 
specified, sq flow control disable supported 00:19:42.870 portid: 1 00:19:42.870 trsvcid: 4420 00:19:42.870 subnqn: nqn.2016-06.io.spdk:testnqn 00:19:42.870 traddr: 10.0.0.1 00:19:42.870 eflags: none 00:19:42.870 sectype: none 00:19:42.870 20:47:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:19:42.870 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:19:43.128 ===================================================== 00:19:43.128 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:19:43.128 ===================================================== 00:19:43.128 Controller Capabilities/Features 00:19:43.128 ================================ 00:19:43.128 Vendor ID: 0000 00:19:43.128 Subsystem Vendor ID: 0000 00:19:43.128 Serial Number: 19979a0bb8a5f7419746 00:19:43.128 Model Number: Linux 00:19:43.128 Firmware Version: 6.8.9-20 00:19:43.128 Recommended Arb Burst: 0 00:19:43.128 IEEE OUI Identifier: 00 00 00 00:19:43.128 Multi-path I/O 00:19:43.128 May have multiple subsystem ports: No 00:19:43.128 May have multiple controllers: No 00:19:43.128 Associated with SR-IOV VF: No 00:19:43.128 Max Data Transfer Size: Unlimited 00:19:43.128 Max Number of Namespaces: 0 00:19:43.128 Max Number of I/O Queues: 1024 00:19:43.128 NVMe Specification Version (VS): 1.3 00:19:43.128 NVMe Specification Version (Identify): 1.3 00:19:43.128 Maximum Queue Entries: 1024 00:19:43.128 Contiguous Queues Required: No 00:19:43.128 Arbitration Mechanisms Supported 00:19:43.128 Weighted Round Robin: Not Supported 00:19:43.128 Vendor Specific: Not Supported 00:19:43.128 Reset Timeout: 7500 ms 00:19:43.128 Doorbell Stride: 4 bytes 00:19:43.128 NVM Subsystem Reset: Not Supported 00:19:43.128 Command Sets Supported 00:19:43.128 NVM Command Set: Supported 00:19:43.128 Boot Partition: Not Supported 00:19:43.128 Memory Page Size Minimum: 4096 bytes 00:19:43.128 Memory Page Size Maximum: 4096 bytes 00:19:43.128 Persistent Memory Region: Not Supported 00:19:43.128 Optional Asynchronous Events Supported 00:19:43.128 Namespace Attribute Notices: Not Supported 00:19:43.128 Firmware Activation Notices: Not Supported 00:19:43.128 ANA Change Notices: Not Supported 00:19:43.128 PLE Aggregate Log Change Notices: Not Supported 00:19:43.128 LBA Status Info Alert Notices: Not Supported 00:19:43.128 EGE Aggregate Log Change Notices: Not Supported 00:19:43.128 Normal NVM Subsystem Shutdown event: Not Supported 00:19:43.128 Zone Descriptor Change Notices: Not Supported 00:19:43.128 Discovery Log Change Notices: Supported 00:19:43.128 Controller Attributes 00:19:43.128 128-bit Host Identifier: Not Supported 00:19:43.128 Non-Operational Permissive Mode: Not Supported 00:19:43.128 NVM Sets: Not Supported 00:19:43.128 Read Recovery Levels: Not Supported 00:19:43.128 Endurance Groups: Not Supported 00:19:43.128 Predictable Latency Mode: Not Supported 00:19:43.128 Traffic Based Keep ALive: Not Supported 00:19:43.128 Namespace Granularity: Not Supported 00:19:43.128 SQ Associations: Not Supported 00:19:43.128 UUID List: Not Supported 00:19:43.128 Multi-Domain Subsystem: Not Supported 00:19:43.128 Fixed Capacity Management: Not Supported 00:19:43.128 Variable Capacity Management: Not Supported 00:19:43.128 Delete Endurance Group: Not Supported 00:19:43.128 Delete NVM Set: Not Supported 00:19:43.128 Extended LBA Formats Supported: Not Supported 00:19:43.128 Flexible Data 
Placement Supported: Not Supported 00:19:43.128 00:19:43.128 Controller Memory Buffer Support 00:19:43.128 ================================ 00:19:43.128 Supported: No 00:19:43.128 00:19:43.128 Persistent Memory Region Support 00:19:43.128 ================================ 00:19:43.128 Supported: No 00:19:43.128 00:19:43.128 Admin Command Set Attributes 00:19:43.128 ============================ 00:19:43.128 Security Send/Receive: Not Supported 00:19:43.128 Format NVM: Not Supported 00:19:43.128 Firmware Activate/Download: Not Supported 00:19:43.128 Namespace Management: Not Supported 00:19:43.128 Device Self-Test: Not Supported 00:19:43.128 Directives: Not Supported 00:19:43.128 NVMe-MI: Not Supported 00:19:43.128 Virtualization Management: Not Supported 00:19:43.128 Doorbell Buffer Config: Not Supported 00:19:43.128 Get LBA Status Capability: Not Supported 00:19:43.128 Command & Feature Lockdown Capability: Not Supported 00:19:43.128 Abort Command Limit: 1 00:19:43.128 Async Event Request Limit: 1 00:19:43.128 Number of Firmware Slots: N/A 00:19:43.128 Firmware Slot 1 Read-Only: N/A 00:19:43.128 Firmware Activation Without Reset: N/A 00:19:43.128 Multiple Update Detection Support: N/A 00:19:43.128 Firmware Update Granularity: No Information Provided 00:19:43.128 Per-Namespace SMART Log: No 00:19:43.128 Asymmetric Namespace Access Log Page: Not Supported 00:19:43.128 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:19:43.128 Command Effects Log Page: Not Supported 00:19:43.128 Get Log Page Extended Data: Supported 00:19:43.128 Telemetry Log Pages: Not Supported 00:19:43.128 Persistent Event Log Pages: Not Supported 00:19:43.128 Supported Log Pages Log Page: May Support 00:19:43.128 Commands Supported & Effects Log Page: Not Supported 00:19:43.128 Feature Identifiers & Effects Log Page:May Support 00:19:43.128 NVMe-MI Commands & Effects Log Page: May Support 00:19:43.128 Data Area 4 for Telemetry Log: Not Supported 00:19:43.128 Error Log Page Entries Supported: 1 00:19:43.128 Keep Alive: Not Supported 00:19:43.128 00:19:43.128 NVM Command Set Attributes 00:19:43.128 ========================== 00:19:43.128 Submission Queue Entry Size 00:19:43.128 Max: 1 00:19:43.128 Min: 1 00:19:43.128 Completion Queue Entry Size 00:19:43.128 Max: 1 00:19:43.128 Min: 1 00:19:43.128 Number of Namespaces: 0 00:19:43.128 Compare Command: Not Supported 00:19:43.128 Write Uncorrectable Command: Not Supported 00:19:43.128 Dataset Management Command: Not Supported 00:19:43.128 Write Zeroes Command: Not Supported 00:19:43.128 Set Features Save Field: Not Supported 00:19:43.128 Reservations: Not Supported 00:19:43.128 Timestamp: Not Supported 00:19:43.128 Copy: Not Supported 00:19:43.128 Volatile Write Cache: Not Present 00:19:43.128 Atomic Write Unit (Normal): 1 00:19:43.128 Atomic Write Unit (PFail): 1 00:19:43.128 Atomic Compare & Write Unit: 1 00:19:43.128 Fused Compare & Write: Not Supported 00:19:43.128 Scatter-Gather List 00:19:43.128 SGL Command Set: Supported 00:19:43.128 SGL Keyed: Not Supported 00:19:43.128 SGL Bit Bucket Descriptor: Not Supported 00:19:43.128 SGL Metadata Pointer: Not Supported 00:19:43.128 Oversized SGL: Not Supported 00:19:43.128 SGL Metadata Address: Not Supported 00:19:43.128 SGL Offset: Supported 00:19:43.128 Transport SGL Data Block: Not Supported 00:19:43.128 Replay Protected Memory Block: Not Supported 00:19:43.128 00:19:43.128 Firmware Slot Information 00:19:43.128 ========================= 00:19:43.128 Active slot: 0 00:19:43.128 00:19:43.128 00:19:43.128 Error Log 
00:19:43.128 ========= 00:19:43.128 00:19:43.128 Active Namespaces 00:19:43.128 ================= 00:19:43.128 Discovery Log Page 00:19:43.128 ================== 00:19:43.128 Generation Counter: 2 00:19:43.128 Number of Records: 2 00:19:43.128 Record Format: 0 00:19:43.128 00:19:43.128 Discovery Log Entry 0 00:19:43.128 ---------------------- 00:19:43.128 Transport Type: 3 (TCP) 00:19:43.128 Address Family: 1 (IPv4) 00:19:43.128 Subsystem Type: 3 (Current Discovery Subsystem) 00:19:43.128 Entry Flags: 00:19:43.128 Duplicate Returned Information: 0 00:19:43.128 Explicit Persistent Connection Support for Discovery: 0 00:19:43.128 Transport Requirements: 00:19:43.128 Secure Channel: Not Specified 00:19:43.128 Port ID: 1 (0x0001) 00:19:43.128 Controller ID: 65535 (0xffff) 00:19:43.128 Admin Max SQ Size: 32 00:19:43.128 Transport Service Identifier: 4420 00:19:43.128 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:19:43.128 Transport Address: 10.0.0.1 00:19:43.128 Discovery Log Entry 1 00:19:43.128 ---------------------- 00:19:43.128 Transport Type: 3 (TCP) 00:19:43.128 Address Family: 1 (IPv4) 00:19:43.128 Subsystem Type: 2 (NVM Subsystem) 00:19:43.128 Entry Flags: 00:19:43.129 Duplicate Returned Information: 0 00:19:43.129 Explicit Persistent Connection Support for Discovery: 0 00:19:43.129 Transport Requirements: 00:19:43.129 Secure Channel: Not Specified 00:19:43.129 Port ID: 1 (0x0001) 00:19:43.129 Controller ID: 65535 (0xffff) 00:19:43.129 Admin Max SQ Size: 32 00:19:43.129 Transport Service Identifier: 4420 00:19:43.129 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:19:43.129 Transport Address: 10.0.0.1 00:19:43.129 20:47:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:19:43.387 get_feature(0x01) failed 00:19:43.387 get_feature(0x02) failed 00:19:43.387 get_feature(0x04) failed 00:19:43.387 ===================================================== 00:19:43.387 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:19:43.387 ===================================================== 00:19:43.387 Controller Capabilities/Features 00:19:43.387 ================================ 00:19:43.387 Vendor ID: 0000 00:19:43.387 Subsystem Vendor ID: 0000 00:19:43.387 Serial Number: e4fdd54f70c200ea87ca 00:19:43.387 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:19:43.387 Firmware Version: 6.8.9-20 00:19:43.387 Recommended Arb Burst: 6 00:19:43.387 IEEE OUI Identifier: 00 00 00 00:19:43.387 Multi-path I/O 00:19:43.387 May have multiple subsystem ports: Yes 00:19:43.387 May have multiple controllers: Yes 00:19:43.387 Associated with SR-IOV VF: No 00:19:43.387 Max Data Transfer Size: Unlimited 00:19:43.387 Max Number of Namespaces: 1024 00:19:43.387 Max Number of I/O Queues: 128 00:19:43.387 NVMe Specification Version (VS): 1.3 00:19:43.387 NVMe Specification Version (Identify): 1.3 00:19:43.387 Maximum Queue Entries: 1024 00:19:43.387 Contiguous Queues Required: No 00:19:43.387 Arbitration Mechanisms Supported 00:19:43.387 Weighted Round Robin: Not Supported 00:19:43.387 Vendor Specific: Not Supported 00:19:43.387 Reset Timeout: 7500 ms 00:19:43.387 Doorbell Stride: 4 bytes 00:19:43.387 NVM Subsystem Reset: Not Supported 00:19:43.387 Command Sets Supported 00:19:43.387 NVM Command Set: Supported 00:19:43.387 Boot Partition: Not Supported 00:19:43.387 Memory 
Page Size Minimum: 4096 bytes 00:19:43.387 Memory Page Size Maximum: 4096 bytes 00:19:43.387 Persistent Memory Region: Not Supported 00:19:43.387 Optional Asynchronous Events Supported 00:19:43.387 Namespace Attribute Notices: Supported 00:19:43.387 Firmware Activation Notices: Not Supported 00:19:43.387 ANA Change Notices: Supported 00:19:43.387 PLE Aggregate Log Change Notices: Not Supported 00:19:43.387 LBA Status Info Alert Notices: Not Supported 00:19:43.387 EGE Aggregate Log Change Notices: Not Supported 00:19:43.387 Normal NVM Subsystem Shutdown event: Not Supported 00:19:43.387 Zone Descriptor Change Notices: Not Supported 00:19:43.387 Discovery Log Change Notices: Not Supported 00:19:43.387 Controller Attributes 00:19:43.387 128-bit Host Identifier: Supported 00:19:43.387 Non-Operational Permissive Mode: Not Supported 00:19:43.387 NVM Sets: Not Supported 00:19:43.387 Read Recovery Levels: Not Supported 00:19:43.387 Endurance Groups: Not Supported 00:19:43.387 Predictable Latency Mode: Not Supported 00:19:43.387 Traffic Based Keep ALive: Supported 00:19:43.387 Namespace Granularity: Not Supported 00:19:43.387 SQ Associations: Not Supported 00:19:43.387 UUID List: Not Supported 00:19:43.387 Multi-Domain Subsystem: Not Supported 00:19:43.387 Fixed Capacity Management: Not Supported 00:19:43.387 Variable Capacity Management: Not Supported 00:19:43.387 Delete Endurance Group: Not Supported 00:19:43.387 Delete NVM Set: Not Supported 00:19:43.387 Extended LBA Formats Supported: Not Supported 00:19:43.387 Flexible Data Placement Supported: Not Supported 00:19:43.387 00:19:43.387 Controller Memory Buffer Support 00:19:43.387 ================================ 00:19:43.387 Supported: No 00:19:43.387 00:19:43.387 Persistent Memory Region Support 00:19:43.387 ================================ 00:19:43.387 Supported: No 00:19:43.387 00:19:43.387 Admin Command Set Attributes 00:19:43.387 ============================ 00:19:43.387 Security Send/Receive: Not Supported 00:19:43.387 Format NVM: Not Supported 00:19:43.387 Firmware Activate/Download: Not Supported 00:19:43.387 Namespace Management: Not Supported 00:19:43.387 Device Self-Test: Not Supported 00:19:43.387 Directives: Not Supported 00:19:43.388 NVMe-MI: Not Supported 00:19:43.388 Virtualization Management: Not Supported 00:19:43.388 Doorbell Buffer Config: Not Supported 00:19:43.388 Get LBA Status Capability: Not Supported 00:19:43.388 Command & Feature Lockdown Capability: Not Supported 00:19:43.388 Abort Command Limit: 4 00:19:43.388 Async Event Request Limit: 4 00:19:43.388 Number of Firmware Slots: N/A 00:19:43.388 Firmware Slot 1 Read-Only: N/A 00:19:43.388 Firmware Activation Without Reset: N/A 00:19:43.388 Multiple Update Detection Support: N/A 00:19:43.388 Firmware Update Granularity: No Information Provided 00:19:43.388 Per-Namespace SMART Log: Yes 00:19:43.388 Asymmetric Namespace Access Log Page: Supported 00:19:43.388 ANA Transition Time : 10 sec 00:19:43.388 00:19:43.388 Asymmetric Namespace Access Capabilities 00:19:43.388 ANA Optimized State : Supported 00:19:43.388 ANA Non-Optimized State : Supported 00:19:43.388 ANA Inaccessible State : Supported 00:19:43.388 ANA Persistent Loss State : Supported 00:19:43.388 ANA Change State : Supported 00:19:43.388 ANAGRPID is not changed : No 00:19:43.388 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:19:43.388 00:19:43.388 ANA Group Identifier Maximum : 128 00:19:43.388 Number of ANA Group Identifiers : 128 00:19:43.388 Max Number of Allowed Namespaces : 1024 00:19:43.388 Subsystem 
NQN: nqn.2016-06.io.spdk:testnqn 00:19:43.388 Command Effects Log Page: Supported 00:19:43.388 Get Log Page Extended Data: Supported 00:19:43.388 Telemetry Log Pages: Not Supported 00:19:43.388 Persistent Event Log Pages: Not Supported 00:19:43.388 Supported Log Pages Log Page: May Support 00:19:43.388 Commands Supported & Effects Log Page: Not Supported 00:19:43.388 Feature Identifiers & Effects Log Page:May Support 00:19:43.388 NVMe-MI Commands & Effects Log Page: May Support 00:19:43.388 Data Area 4 for Telemetry Log: Not Supported 00:19:43.388 Error Log Page Entries Supported: 128 00:19:43.388 Keep Alive: Supported 00:19:43.388 Keep Alive Granularity: 1000 ms 00:19:43.388 00:19:43.388 NVM Command Set Attributes 00:19:43.388 ========================== 00:19:43.388 Submission Queue Entry Size 00:19:43.388 Max: 64 00:19:43.388 Min: 64 00:19:43.388 Completion Queue Entry Size 00:19:43.388 Max: 16 00:19:43.388 Min: 16 00:19:43.388 Number of Namespaces: 1024 00:19:43.388 Compare Command: Not Supported 00:19:43.388 Write Uncorrectable Command: Not Supported 00:19:43.388 Dataset Management Command: Supported 00:19:43.388 Write Zeroes Command: Supported 00:19:43.388 Set Features Save Field: Not Supported 00:19:43.388 Reservations: Not Supported 00:19:43.388 Timestamp: Not Supported 00:19:43.388 Copy: Not Supported 00:19:43.388 Volatile Write Cache: Present 00:19:43.388 Atomic Write Unit (Normal): 1 00:19:43.388 Atomic Write Unit (PFail): 1 00:19:43.388 Atomic Compare & Write Unit: 1 00:19:43.388 Fused Compare & Write: Not Supported 00:19:43.388 Scatter-Gather List 00:19:43.388 SGL Command Set: Supported 00:19:43.388 SGL Keyed: Not Supported 00:19:43.388 SGL Bit Bucket Descriptor: Not Supported 00:19:43.388 SGL Metadata Pointer: Not Supported 00:19:43.388 Oversized SGL: Not Supported 00:19:43.388 SGL Metadata Address: Not Supported 00:19:43.388 SGL Offset: Supported 00:19:43.388 Transport SGL Data Block: Not Supported 00:19:43.388 Replay Protected Memory Block: Not Supported 00:19:43.388 00:19:43.388 Firmware Slot Information 00:19:43.388 ========================= 00:19:43.388 Active slot: 0 00:19:43.388 00:19:43.388 Asymmetric Namespace Access 00:19:43.388 =========================== 00:19:43.388 Change Count : 0 00:19:43.388 Number of ANA Group Descriptors : 1 00:19:43.388 ANA Group Descriptor : 0 00:19:43.388 ANA Group ID : 1 00:19:43.388 Number of NSID Values : 1 00:19:43.388 Change Count : 0 00:19:43.388 ANA State : 1 00:19:43.388 Namespace Identifier : 1 00:19:43.388 00:19:43.388 Commands Supported and Effects 00:19:43.388 ============================== 00:19:43.388 Admin Commands 00:19:43.388 -------------- 00:19:43.388 Get Log Page (02h): Supported 00:19:43.388 Identify (06h): Supported 00:19:43.388 Abort (08h): Supported 00:19:43.388 Set Features (09h): Supported 00:19:43.388 Get Features (0Ah): Supported 00:19:43.388 Asynchronous Event Request (0Ch): Supported 00:19:43.388 Keep Alive (18h): Supported 00:19:43.388 I/O Commands 00:19:43.388 ------------ 00:19:43.388 Flush (00h): Supported 00:19:43.388 Write (01h): Supported LBA-Change 00:19:43.388 Read (02h): Supported 00:19:43.388 Write Zeroes (08h): Supported LBA-Change 00:19:43.388 Dataset Management (09h): Supported 00:19:43.388 00:19:43.388 Error Log 00:19:43.388 ========= 00:19:43.388 Entry: 0 00:19:43.388 Error Count: 0x3 00:19:43.388 Submission Queue Id: 0x0 00:19:43.388 Command Id: 0x5 00:19:43.388 Phase Bit: 0 00:19:43.388 Status Code: 0x2 00:19:43.388 Status Code Type: 0x0 00:19:43.388 Do Not Retry: 1 00:19:43.388 Error 
Location: 0x28 00:19:43.388 LBA: 0x0 00:19:43.388 Namespace: 0x0 00:19:43.388 Vendor Log Page: 0x0 00:19:43.388 ----------- 00:19:43.388 Entry: 1 00:19:43.388 Error Count: 0x2 00:19:43.388 Submission Queue Id: 0x0 00:19:43.388 Command Id: 0x5 00:19:43.388 Phase Bit: 0 00:19:43.388 Status Code: 0x2 00:19:43.388 Status Code Type: 0x0 00:19:43.388 Do Not Retry: 1 00:19:43.388 Error Location: 0x28 00:19:43.388 LBA: 0x0 00:19:43.388 Namespace: 0x0 00:19:43.388 Vendor Log Page: 0x0 00:19:43.388 ----------- 00:19:43.388 Entry: 2 00:19:43.388 Error Count: 0x1 00:19:43.388 Submission Queue Id: 0x0 00:19:43.388 Command Id: 0x4 00:19:43.388 Phase Bit: 0 00:19:43.388 Status Code: 0x2 00:19:43.388 Status Code Type: 0x0 00:19:43.388 Do Not Retry: 1 00:19:43.388 Error Location: 0x28 00:19:43.388 LBA: 0x0 00:19:43.388 Namespace: 0x0 00:19:43.388 Vendor Log Page: 0x0 00:19:43.388 00:19:43.388 Number of Queues 00:19:43.388 ================ 00:19:43.388 Number of I/O Submission Queues: 128 00:19:43.388 Number of I/O Completion Queues: 128 00:19:43.388 00:19:43.388 ZNS Specific Controller Data 00:19:43.388 ============================ 00:19:43.388 Zone Append Size Limit: 0 00:19:43.388 00:19:43.388 00:19:43.388 Active Namespaces 00:19:43.388 ================= 00:19:43.388 get_feature(0x05) failed 00:19:43.388 Namespace ID:1 00:19:43.388 Command Set Identifier: NVM (00h) 00:19:43.388 Deallocate: Supported 00:19:43.388 Deallocated/Unwritten Error: Not Supported 00:19:43.388 Deallocated Read Value: Unknown 00:19:43.388 Deallocate in Write Zeroes: Not Supported 00:19:43.388 Deallocated Guard Field: 0xFFFF 00:19:43.388 Flush: Supported 00:19:43.388 Reservation: Not Supported 00:19:43.388 Namespace Sharing Capabilities: Multiple Controllers 00:19:43.388 Size (in LBAs): 1310720 (5GiB) 00:19:43.388 Capacity (in LBAs): 1310720 (5GiB) 00:19:43.388 Utilization (in LBAs): 1310720 (5GiB) 00:19:43.388 UUID: 22439bca-ea9a-4a8c-93a9-df5a44d6bdf0 00:19:43.388 Thin Provisioning: Not Supported 00:19:43.388 Per-NS Atomic Units: Yes 00:19:43.388 Atomic Boundary Size (Normal): 0 00:19:43.388 Atomic Boundary Size (PFail): 0 00:19:43.388 Atomic Boundary Offset: 0 00:19:43.388 NGUID/EUI64 Never Reused: No 00:19:43.388 ANA group ID: 1 00:19:43.388 Namespace Write Protected: No 00:19:43.388 Number of LBA Formats: 1 00:19:43.388 Current LBA Format: LBA Format #00 00:19:43.388 LBA Format #00: Data Size: 4096 Metadata Size: 0 00:19:43.388 00:19:43.388 20:47:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:19:43.388 20:47:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:43.388 20:47:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:19:43.388 20:47:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:43.388 20:47:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:19:43.388 20:47:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:43.388 20:47:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:43.388 rmmod nvme_tcp 00:19:43.388 rmmod nvme_fabrics 00:19:43.388 20:47:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:43.388 20:47:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:19:43.388 20:47:38 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:19:43.388 20:47:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:19:43.388 20:47:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:43.388 20:47:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:43.388 20:47:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:43.389 20:47:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:19:43.389 20:47:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:19:43.389 20:47:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:43.389 20:47:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:19:43.389 20:47:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:43.389 20:47:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:19:43.389 20:47:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:19:43.389 20:47:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:19:43.389 20:47:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:19:43.389 20:47:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:19:43.647 20:47:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:19:43.647 20:47:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:19:43.647 20:47:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:19:43.647 20:47:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:19:43.647 20:47:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:19:43.647 20:47:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:19:43.647 20:47:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:19:43.647 20:47:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:43.647 20:47:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:43.647 20:47:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:19:43.647 20:47:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:43.647 20:47:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:43.647 20:47:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:43.647 20:47:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@300 -- 
# return 0 00:19:43.647 20:47:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:19:43.647 20:47:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:19:43.647 20:47:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:19:43.647 20:47:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:19:43.647 20:47:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:19:43.647 20:47:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:19:43.647 20:47:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:19:43.647 20:47:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:19:43.647 20:47:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:19:43.904 20:47:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:19:44.470 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:44.728 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:19:44.728 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:19:44.728 00:19:44.728 real 0m3.562s 00:19:44.728 user 0m1.197s 00:19:44.728 sys 0m1.792s 00:19:44.728 20:47:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:44.728 ************************************ 00:19:44.728 END TEST nvmf_identify_kernel_target 00:19:44.728 ************************************ 00:19:44.728 20:47:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.728 20:47:39 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:19:44.728 20:47:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:44.728 20:47:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:44.728 20:47:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:19:44.728 ************************************ 00:19:44.728 START TEST nvmf_auth_host 00:19:44.728 ************************************ 00:19:44.728 20:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:19:44.987 * Looking for test storage... 
00:19:44.987 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:44.987 20:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:44.987 20:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lcov --version 00:19:44.987 20:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:44.987 20:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:44.987 20:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:44.987 20:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:44.987 20:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:44.987 20:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:19:44.987 20:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:19:44.987 20:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:19:44.988 20:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:19:44.988 20:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:19:44.988 20:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:19:44.988 20:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:19:44.988 20:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:44.988 20:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:19:44.988 20:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:19:44.988 20:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:44.988 20:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:44.988 20:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:19:44.988 20:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:19:44.988 20:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:44.988 20:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:19:44.988 20:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:19:44.988 20:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:19:44.988 20:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:19:44.988 20:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:44.988 20:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:19:44.988 20:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:19:44.988 20:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:44.988 20:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:44.988 20:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:19:44.988 20:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:44.988 20:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:44.988 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:44.988 --rc genhtml_branch_coverage=1 00:19:44.988 --rc genhtml_function_coverage=1 00:19:44.988 --rc genhtml_legend=1 00:19:44.988 --rc geninfo_all_blocks=1 00:19:44.988 --rc geninfo_unexecuted_blocks=1 00:19:44.988 00:19:44.988 ' 00:19:44.988 20:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:44.988 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:44.988 --rc genhtml_branch_coverage=1 00:19:44.988 --rc genhtml_function_coverage=1 00:19:44.988 --rc genhtml_legend=1 00:19:44.988 --rc geninfo_all_blocks=1 00:19:44.988 --rc geninfo_unexecuted_blocks=1 00:19:44.988 00:19:44.988 ' 00:19:44.988 20:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:44.988 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:44.988 --rc genhtml_branch_coverage=1 00:19:44.988 --rc genhtml_function_coverage=1 00:19:44.988 --rc genhtml_legend=1 00:19:44.988 --rc geninfo_all_blocks=1 00:19:44.988 --rc geninfo_unexecuted_blocks=1 00:19:44.988 00:19:44.988 ' 00:19:44.988 20:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:44.988 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:44.988 --rc genhtml_branch_coverage=1 00:19:44.988 --rc genhtml_function_coverage=1 00:19:44.988 --rc genhtml_legend=1 00:19:44.988 --rc geninfo_all_blocks=1 00:19:44.988 --rc geninfo_unexecuted_blocks=1 00:19:44.988 00:19:44.988 ' 00:19:44.988 20:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:44.988 20:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:19:44.988 20:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:44.988 20:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:44.988 20:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:44.988 20:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:44.988 20:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:44.988 20:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:44.988 20:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:44.988 20:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:44.988 20:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:44.988 20:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:44.988 20:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b 00:19:44.988 20:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b7a0101-ee75-44bd-b64f-b6a56d193f2b 00:19:44.988 20:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:44.988 20:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:44.988 20:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:44.988 20:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:44.988 20:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:44.988 20:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:19:44.988 20:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:44.988 20:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:44.988 20:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:44.988 20:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:44.988 20:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:44.988 20:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:44.988 20:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:19:44.988 20:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:44.988 20:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:19:44.988 20:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:44.988 20:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:44.988 20:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:44.988 20:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:44.988 20:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:44.988 20:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:44.988 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:44.988 20:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:44.989 20:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:44.989 20:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:44.989 20:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:19:44.989 20:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:19:44.989 20:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:19:44.989 20:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:19:44.989 20:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:19:44.989 20:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:19:44.989 20:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:19:44.989 20:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:19:44.989 20:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:19:44.989 20:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:44.989 20:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:44.989 20:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:44.989 20:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:44.989 20:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:44.989 20:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:44.989 20:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:44.989 20:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:44.989 20:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:19:44.989 20:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:19:44.989 20:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:19:44.989 20:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:19:44.989 20:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:19:44.989 20:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@460 -- # nvmf_veth_init 00:19:44.989 20:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:44.989 20:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:19:44.989 20:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:19:44.989 20:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:19:44.989 20:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:44.989 20:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:19:44.989 20:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:44.989 20:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:19:44.989 20:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:44.989 20:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:19:44.989 20:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:44.989 20:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:44.989 20:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:44.989 20:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:44.989 20:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:44.989 20:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:44.989 20:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:19:44.989 Cannot find device "nvmf_init_br" 00:19:44.989 20:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # true 00:19:44.989 20:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:19:45.248 Cannot find device "nvmf_init_br2" 00:19:45.248 20:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # true 00:19:45.248 20:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:19:45.248 Cannot find device "nvmf_tgt_br" 00:19:45.248 20:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@164 -- # true 00:19:45.248 20:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:19:45.248 Cannot find device "nvmf_tgt_br2" 00:19:45.248 20:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@165 -- # true 00:19:45.248 20:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:19:45.248 Cannot find device "nvmf_init_br" 00:19:45.248 20:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # true 00:19:45.248 20:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:19:45.248 Cannot find device "nvmf_init_br2" 00:19:45.248 20:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@167 -- # true 00:19:45.248 20:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:19:45.248 Cannot find device "nvmf_tgt_br" 00:19:45.248 20:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@168 -- # true 00:19:45.248 20:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:19:45.248 Cannot find device "nvmf_tgt_br2" 00:19:45.248 20:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # true 00:19:45.248 20:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:19:45.248 Cannot find device "nvmf_br" 00:19:45.248 20:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # true 00:19:45.248 20:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:19:45.248 Cannot find device "nvmf_init_if" 00:19:45.248 20:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # true 00:19:45.248 20:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:19:45.248 Cannot find device "nvmf_init_if2" 00:19:45.248 20:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@172 -- # true 00:19:45.248 20:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:45.248 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:45.248 20:47:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@173 -- # true 00:19:45.248 20:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:45.248 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:45.249 20:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # true 00:19:45.249 20:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:19:45.249 20:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:45.249 20:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:19:45.249 20:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:45.249 20:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:45.249 20:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:45.249 20:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:45.249 20:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:45.249 20:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:19:45.249 20:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:19:45.249 20:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:19:45.249 20:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:19:45.249 20:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:19:45.249 20:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:19:45.249 20:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:19:45.249 20:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:19:45.249 20:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:19:45.508 20:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:45.508 20:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:45.508 20:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:45.508 20:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:19:45.508 20:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:19:45.508 20:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:19:45.508 20:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:19:45.508 20:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 
00:19:45.508 20:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:45.508 20:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:45.508 20:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:19:45.508 20:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:19:45.508 20:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:19:45.508 20:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:45.508 20:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:19:45.508 20:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:19:45.508 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:45.508 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.110 ms 00:19:45.508 00:19:45.508 --- 10.0.0.3 ping statistics --- 00:19:45.508 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:45.508 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:19:45.508 20:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:19:45.508 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:19:45.508 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.069 ms 00:19:45.508 00:19:45.508 --- 10.0.0.4 ping statistics --- 00:19:45.508 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:45.508 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:19:45.508 20:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:45.508 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:45.508 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:19:45.508 00:19:45.508 --- 10.0.0.1 ping statistics --- 00:19:45.508 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:45.508 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:19:45.508 20:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:19:45.508 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:45.508 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms 00:19:45.508 00:19:45.508 --- 10.0.0.2 ping statistics --- 00:19:45.508 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:45.508 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:19:45.508 20:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:45.508 20:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@461 -- # return 0 00:19:45.508 20:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:45.508 20:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:45.508 20:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:45.508 20:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:45.508 20:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:45.508 20:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:45.508 20:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:45.508 20:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:19:45.508 20:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:45.508 20:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:45.509 20:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:45.509 20:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=78903 00:19:45.509 20:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:19:45.509 20:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 78903 00:19:45.509 20:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 78903 ']' 00:19:45.509 20:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:45.509 20:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:45.509 20:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:19:45.509 20:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:45.509 20:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:46.892 20:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:46.892 20:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:19:46.892 20:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:46.892 20:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:46.892 20:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:46.892 20:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:46.892 20:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:19:46.892 20:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:19:46.892 20:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:19:46.892 20:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:46.892 20:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:19:46.892 20:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:19:46.892 20:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:19:46.892 20:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:46.892 20:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=4b505c404d261bf00fe8097ba30fa64b 00:19:46.892 20:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:19:46.892 20:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.bzo 00:19:46.892 20:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 4b505c404d261bf00fe8097ba30fa64b 0 00:19:46.892 20:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 4b505c404d261bf00fe8097ba30fa64b 0 00:19:46.892 20:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:19:46.892 20:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:46.892 20:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=4b505c404d261bf00fe8097ba30fa64b 00:19:46.892 20:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:19:46.892 20:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:19:46.892 20:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.bzo 00:19:46.892 20:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.bzo 00:19:46.893 20:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.bzo 00:19:46.893 20:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:19:46.893 20:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:19:46.893 20:47:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:46.893 20:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:19:46.893 20:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:19:46.893 20:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:19:46.893 20:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:46.893 20:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=1e51d4cf359d96d35b45cedd871f5a8ab480acb7d50e4cd50f7da7a9157c5914 00:19:46.893 20:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:19:46.893 20:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.IIF 00:19:46.893 20:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 1e51d4cf359d96d35b45cedd871f5a8ab480acb7d50e4cd50f7da7a9157c5914 3 00:19:46.893 20:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 1e51d4cf359d96d35b45cedd871f5a8ab480acb7d50e4cd50f7da7a9157c5914 3 00:19:46.893 20:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:19:46.893 20:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:46.893 20:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=1e51d4cf359d96d35b45cedd871f5a8ab480acb7d50e4cd50f7da7a9157c5914 00:19:46.893 20:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:19:46.893 20:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:19:46.893 20:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.IIF 00:19:46.893 20:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.IIF 00:19:46.893 20:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.IIF 00:19:46.893 20:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:19:46.893 20:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:19:46.893 20:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:46.893 20:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:19:46.893 20:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:19:46.893 20:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:19:46.893 20:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:46.893 20:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=506fa6e64674d00c22541bb5307cd781ff6659fb06d6e4cf 00:19:46.893 20:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:19:46.893 20:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.KqZ 00:19:46.893 20:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 506fa6e64674d00c22541bb5307cd781ff6659fb06d6e4cf 0 00:19:46.893 20:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 506fa6e64674d00c22541bb5307cd781ff6659fb06d6e4cf 0 
00:19:46.893 20:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:19:46.893 20:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:46.893 20:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=506fa6e64674d00c22541bb5307cd781ff6659fb06d6e4cf 00:19:46.893 20:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:19:46.893 20:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:19:46.893 20:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.KqZ 00:19:46.893 20:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.KqZ 00:19:46.893 20:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.KqZ 00:19:46.893 20:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:19:46.893 20:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:19:46.893 20:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:46.893 20:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:19:46.893 20:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:19:46.893 20:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:19:46.893 20:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:46.893 20:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=ea5a92b5ae53454941cf98286f505d06b61437424f0a51e2 00:19:46.893 20:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:19:46.893 20:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.m5y 00:19:46.893 20:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key ea5a92b5ae53454941cf98286f505d06b61437424f0a51e2 2 00:19:46.893 20:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 ea5a92b5ae53454941cf98286f505d06b61437424f0a51e2 2 00:19:46.893 20:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:19:46.893 20:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:46.893 20:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=ea5a92b5ae53454941cf98286f505d06b61437424f0a51e2 00:19:46.893 20:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:19:46.893 20:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:19:46.893 20:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.m5y 00:19:46.893 20:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.m5y 00:19:47.152 20:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.m5y 00:19:47.152 20:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:19:47.152 20:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:19:47.152 20:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:47.152 20:47:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:19:47.152 20:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:19:47.152 20:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:19:47.152 20:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:47.152 20:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=151fec744acfa4ac3f6b256aedf49a1b 00:19:47.152 20:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:19:47.152 20:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.jVl 00:19:47.152 20:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 151fec744acfa4ac3f6b256aedf49a1b 1 00:19:47.152 20:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 151fec744acfa4ac3f6b256aedf49a1b 1 00:19:47.152 20:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:19:47.152 20:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:47.152 20:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=151fec744acfa4ac3f6b256aedf49a1b 00:19:47.152 20:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:19:47.152 20:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:19:47.153 20:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.jVl 00:19:47.153 20:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.jVl 00:19:47.153 20:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.jVl 00:19:47.153 20:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:19:47.153 20:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:19:47.153 20:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:47.153 20:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:19:47.153 20:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:19:47.153 20:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:19:47.153 20:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:47.153 20:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=9f981a6aa4dc5b1a267a5ae2d4dbf6cf 00:19:47.153 20:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:19:47.153 20:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.IXC 00:19:47.153 20:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 9f981a6aa4dc5b1a267a5ae2d4dbf6cf 1 00:19:47.153 20:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 9f981a6aa4dc5b1a267a5ae2d4dbf6cf 1 00:19:47.153 20:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:19:47.153 20:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:47.153 20:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
key=9f981a6aa4dc5b1a267a5ae2d4dbf6cf 00:19:47.153 20:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:19:47.153 20:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:19:47.153 20:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.IXC 00:19:47.153 20:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.IXC 00:19:47.153 20:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.IXC 00:19:47.153 20:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:19:47.153 20:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:19:47.153 20:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:47.153 20:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:19:47.153 20:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:19:47.153 20:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:19:47.153 20:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:47.153 20:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=ac10a36026b1a24fd6be57792ed2104d4f62c2be685d1776 00:19:47.153 20:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:19:47.153 20:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.KwF 00:19:47.153 20:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key ac10a36026b1a24fd6be57792ed2104d4f62c2be685d1776 2 00:19:47.153 20:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 ac10a36026b1a24fd6be57792ed2104d4f62c2be685d1776 2 00:19:47.153 20:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:19:47.153 20:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:47.153 20:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=ac10a36026b1a24fd6be57792ed2104d4f62c2be685d1776 00:19:47.153 20:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:19:47.153 20:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:19:47.153 20:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.KwF 00:19:47.153 20:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.KwF 00:19:47.153 20:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.KwF 00:19:47.153 20:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:19:47.153 20:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:19:47.153 20:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:47.153 20:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:19:47.153 20:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:19:47.153 20:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:19:47.153 20:47:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:47.153 20:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=6a7b83b63d8bf22f7604c4427d961160 00:19:47.153 20:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:19:47.153 20:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.jRp 00:19:47.153 20:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 6a7b83b63d8bf22f7604c4427d961160 0 00:19:47.153 20:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 6a7b83b63d8bf22f7604c4427d961160 0 00:19:47.153 20:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:19:47.153 20:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:47.153 20:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=6a7b83b63d8bf22f7604c4427d961160 00:19:47.153 20:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:19:47.153 20:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:19:47.411 20:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.jRp 00:19:47.411 20:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.jRp 00:19:47.411 20:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.jRp 00:19:47.411 20:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:19:47.411 20:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:19:47.411 20:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:47.411 20:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:19:47.411 20:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:19:47.411 20:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:19:47.411 20:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:47.411 20:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=2bebe14be37ebc2bf27840d0670540ef0bbc133f39c311b0aa53f205ffe7405a 00:19:47.411 20:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:19:47.411 20:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.mPt 00:19:47.411 20:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 2bebe14be37ebc2bf27840d0670540ef0bbc133f39c311b0aa53f205ffe7405a 3 00:19:47.411 20:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 2bebe14be37ebc2bf27840d0670540ef0bbc133f39c311b0aa53f205ffe7405a 3 00:19:47.411 20:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:19:47.411 20:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:47.411 20:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=2bebe14be37ebc2bf27840d0670540ef0bbc133f39c311b0aa53f205ffe7405a 00:19:47.411 20:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:19:47.411 20:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@733 -- # python - 00:19:47.411 20:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.mPt 00:19:47.411 20:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.mPt 00:19:47.411 20:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.mPt 00:19:47.411 20:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:19:47.411 20:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 78903 00:19:47.411 20:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 78903 ']' 00:19:47.411 20:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:47.411 20:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:47.411 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:47.411 20:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:47.411 20:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:47.411 20:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:47.669 20:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:47.669 20:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:19:47.669 20:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:19:47.669 20:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.bzo 00:19:47.669 20:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.669 20:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:47.669 20:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.669 20:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.IIF ]] 00:19:47.669 20:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.IIF 00:19:47.669 20:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.669 20:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:47.669 20:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.669 20:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:19:47.669 20:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.KqZ 00:19:47.669 20:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.669 20:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:47.669 20:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.669 20:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.m5y ]] 00:19:47.669 20:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.m5y 00:19:47.669 20:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.669 20:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:47.669 20:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.669 20:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:19:47.669 20:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.jVl 00:19:47.669 20:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.669 20:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:47.669 20:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.669 20:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.IXC ]] 00:19:47.669 20:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.IXC 00:19:47.669 20:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.669 20:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:47.669 20:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.669 20:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:19:47.669 20:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.KwF 00:19:47.669 20:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.669 20:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:47.670 20:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.670 20:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.jRp ]] 00:19:47.670 20:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.jRp 00:19:47.670 20:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.670 20:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:47.670 20:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.670 20:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:19:47.670 20:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.mPt 00:19:47.670 20:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.670 20:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:47.670 20:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.670 20:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:19:47.670 20:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:19:47.670 20:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:19:47.670 20:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:47.670 20:47:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:47.670 20:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:47.670 20:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:47.670 20:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:47.670 20:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:47.670 20:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:47.670 20:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:47.670 20:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:47.670 20:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:47.670 20:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:19:47.670 20:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:19:47.670 20:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:19:47.670 20:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:19:47.670 20:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:19:47.670 20:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:19:47.670 20:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:19:47.670 20:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:19:47.670 20:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:19:47.928 20:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:19:47.928 20:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:19:48.184 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:48.184 Waiting for block devices as requested 00:19:48.440 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:19:48.440 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:19:49.373 20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:19:49.373 20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:19:49.373 20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:19:49.373 20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:19:49.373 20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:19:49.373 20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:19:49.373 20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:19:49.373 20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:19:49.373 20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:19:49.373 No valid GPT data, bailing 00:19:49.373 20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:19:49.373 20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:19:49.373 20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:19:49.373 20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:19:49.373 20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:19:49.373 20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:19:49.373 20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:19:49.373 20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:19:49.373 20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:19:49.373 20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:19:49.373 20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:19:49.373 20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:19:49.373 20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:19:49.373 No valid GPT data, bailing 00:19:49.373 20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:19:49.373 20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:19:49.373 20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@395 -- # return 1 00:19:49.373 20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:19:49.373 20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:19:49.373 20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:19:49.373 20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:19:49.373 20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:19:49.373 20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:19:49.373 20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:19:49.373 20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:19:49.373 20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:19:49.373 20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:19:49.373 No valid GPT data, bailing 00:19:49.373 20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:19:49.373 20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:19:49.373 20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:19:49.373 20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:19:49.373 20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:19:49.373 20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:19:49.373 20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:19:49.373 20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:19:49.373 20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:19:49.373 20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:19:49.373 20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:19:49.373 20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:19:49.373 20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:19:49.373 No valid GPT data, bailing 00:19:49.373 20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:19:49.373 20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:19:49.373 20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:19:49.373 20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:19:49.373 20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme1n1 ]] 00:19:49.373 20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:19:49.373 20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:19:49.373 20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:19:49.631 20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:19:49.631 20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:19:49.631 20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:19:49.631 20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:19:49.631 20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:19:49.631 20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:19:49.631 20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:19:49.631 20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:19:49.631 20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:19:49.631 20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b --hostid=5b7a0101-ee75-44bd-b64f-b6a56d193f2b -a 10.0.0.1 -t tcp -s 4420 00:19:49.631 00:19:49.631 Discovery Log Number of Records 2, Generation counter 2 00:19:49.631 =====Discovery Log Entry 0====== 00:19:49.631 trtype: tcp 00:19:49.631 adrfam: ipv4 00:19:49.631 subtype: current discovery subsystem 00:19:49.631 treq: not specified, sq flow control disable supported 00:19:49.631 portid: 1 00:19:49.631 trsvcid: 4420 00:19:49.631 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:19:49.631 traddr: 10.0.0.1 00:19:49.631 eflags: none 00:19:49.631 sectype: none 00:19:49.631 =====Discovery Log Entry 1====== 00:19:49.631 trtype: tcp 00:19:49.631 adrfam: ipv4 00:19:49.631 subtype: nvme subsystem 00:19:49.631 treq: not specified, sq flow control disable supported 00:19:49.631 portid: 1 00:19:49.631 trsvcid: 4420 00:19:49.631 subnqn: nqn.2024-02.io.spdk:cnode0 00:19:49.631 traddr: 10.0.0.1 00:19:49.631 eflags: none 00:19:49.631 sectype: none 00:19:49.631 20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:19:49.631 20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:19:49.631 20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:19:49.631 20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:19:49.631 20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:49.631 20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:49.631 20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:49.631 20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:49.631 20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTA2ZmE2ZTY0Njc0ZDAwYzIyNTQxYmI1MzA3Y2Q3ODFmZjY2NTlmYjA2ZDZlNGNml+laCg==: 00:19:49.631 20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:ZWE1YTkyYjVhZTUzNDU0OTQxY2Y5ODI4NmY1MDVkMDZiNjE0Mzc0MjRmMGE1MWUy29oHFQ==: 00:19:49.631 20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:49.631 20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:49.631 20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTA2ZmE2ZTY0Njc0ZDAwYzIyNTQxYmI1MzA3Y2Q3ODFmZjY2NTlmYjA2ZDZlNGNml+laCg==: 00:19:49.631 20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWE1YTkyYjVhZTUzNDU0OTQxY2Y5ODI4NmY1MDVkMDZiNjE0Mzc0MjRmMGE1MWUy29oHFQ==: ]] 00:19:49.631 20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWE1YTkyYjVhZTUzNDU0OTQxY2Y5ODI4NmY1MDVkMDZiNjE0Mzc0MjRmMGE1MWUy29oHFQ==: 00:19:49.631 20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:19:49.631 20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:19:49.631 20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:19:49.631 20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:49.631 20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:19:49.631 20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:49.631 20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:19:49.631 20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:49.631 20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:49.631 20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:49.631 20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:49.631 20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.631 20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:49.631 20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.631 20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:49.631 20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:49.631 20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:49.631 20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:49.631 20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:49.631 20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:49.631 20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:49.631 20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:49.631 20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:49.631 20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 
10.0.0.1 ]] 00:19:49.632 20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:49.632 20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:49.632 20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.632 20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:49.890 nvme0n1 00:19:49.890 20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.890 20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:49.890 20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:49.890 20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.890 20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:49.890 20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.890 20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:49.890 20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:49.890 20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.890 20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:49.890 20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.890 20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:19:49.890 20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:49.890 20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:49.890 20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:19:49.890 20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:49.890 20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:49.890 20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:49.890 20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:49.890 20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGI1MDVjNDA0ZDI2MWJmMDBmZTgwOTdiYTMwZmE2NGJeHzz7: 00:19:49.890 20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWU1MWQ0Y2YzNTlkOTZkMzViNDVjZWRkODcxZjVhOGFiNDgwYWNiN2Q1MGU0Y2Q1MGY3ZGE3YTkxNTdjNTkxNM9v5xw=: 00:19:49.890 20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:49.890 20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:49.890 20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGI1MDVjNDA0ZDI2MWJmMDBmZTgwOTdiYTMwZmE2NGJeHzz7: 00:19:49.890 20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWU1MWQ0Y2YzNTlkOTZkMzViNDVjZWRkODcxZjVhOGFiNDgwYWNiN2Q1MGU0Y2Q1MGY3ZGE3YTkxNTdjNTkxNM9v5xw=: ]] 00:19:49.890 20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:MWU1MWQ0Y2YzNTlkOTZkMzViNDVjZWRkODcxZjVhOGFiNDgwYWNiN2Q1MGU0Y2Q1MGY3ZGE3YTkxNTdjNTkxNM9v5xw=: 00:19:49.890 20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:19:49.890 20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:49.890 20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:49.890 20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:49.890 20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:49.890 20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:49.890 20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:49.890 20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.890 20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:49.890 20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.890 20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:49.890 20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:49.890 20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:49.890 20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:49.890 20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:49.890 20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:49.890 20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:49.890 20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:49.890 20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:49.890 20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:49.890 20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:49.890 20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:49.890 20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.890 20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:49.890 nvme0n1 00:19:49.890 20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.890 20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:49.890 20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.890 20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:49.890 20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:50.149 20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.149 
20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:50.149 20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:50.149 20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.149 20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:50.149 20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.149 20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:50.149 20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:19:50.149 20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:50.149 20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:50.149 20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:50.149 20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:50.149 20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTA2ZmE2ZTY0Njc0ZDAwYzIyNTQxYmI1MzA3Y2Q3ODFmZjY2NTlmYjA2ZDZlNGNml+laCg==: 00:19:50.149 20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWE1YTkyYjVhZTUzNDU0OTQxY2Y5ODI4NmY1MDVkMDZiNjE0Mzc0MjRmMGE1MWUy29oHFQ==: 00:19:50.149 20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:50.149 20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:50.149 20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTA2ZmE2ZTY0Njc0ZDAwYzIyNTQxYmI1MzA3Y2Q3ODFmZjY2NTlmYjA2ZDZlNGNml+laCg==: 00:19:50.149 20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWE1YTkyYjVhZTUzNDU0OTQxY2Y5ODI4NmY1MDVkMDZiNjE0Mzc0MjRmMGE1MWUy29oHFQ==: ]] 00:19:50.149 20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWE1YTkyYjVhZTUzNDU0OTQxY2Y5ODI4NmY1MDVkMDZiNjE0Mzc0MjRmMGE1MWUy29oHFQ==: 00:19:50.149 20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:19:50.149 20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:50.149 20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:50.149 20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:50.149 20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:50.149 20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:50.149 20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:50.149 20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.149 20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:50.149 20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.149 20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:50.149 20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:50.149 20:47:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:50.149 20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:50.149 20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:50.149 20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:50.149 20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:50.149 20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:50.149 20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:50.149 20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:50.149 20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:50.149 20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:50.149 20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.149 20:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:50.149 nvme0n1 00:19:50.149 20:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.149 20:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:50.149 20:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:50.149 20:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.149 20:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:50.149 20:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.149 20:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:50.150 20:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:50.150 20:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.150 20:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:50.150 20:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.150 20:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:50.150 20:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:19:50.150 20:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:50.150 20:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:50.150 20:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:50.150 20:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:50.150 20:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTUxZmVjNzQ0YWNmYTRhYzNmNmIyNTZhZWRmNDlhMWJLh0Ry: 00:19:50.150 20:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWY5ODFhNmFhNGRjNWIxYTI2N2E1YWUyZDRkYmY2Y2YC0VJ4: 00:19:50.150 20:47:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:50.150 20:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:50.150 20:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTUxZmVjNzQ0YWNmYTRhYzNmNmIyNTZhZWRmNDlhMWJLh0Ry: 00:19:50.150 20:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OWY5ODFhNmFhNGRjNWIxYTI2N2E1YWUyZDRkYmY2Y2YC0VJ4: ]] 00:19:50.150 20:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OWY5ODFhNmFhNGRjNWIxYTI2N2E1YWUyZDRkYmY2Y2YC0VJ4: 00:19:50.150 20:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:19:50.150 20:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:50.150 20:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:50.150 20:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:50.150 20:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:50.150 20:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:50.150 20:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:50.150 20:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.150 20:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:50.150 20:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.150 20:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:50.150 20:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:50.150 20:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:50.150 20:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:50.150 20:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:50.150 20:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:50.150 20:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:50.150 20:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:50.150 20:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:50.150 20:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:50.150 20:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:50.150 20:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:50.150 20:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.150 20:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:50.408 nvme0n1 00:19:50.408 20:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.408 20:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:50.408 20:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:50.408 20:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.408 20:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:50.408 20:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.408 20:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:50.408 20:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:50.408 20:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.408 20:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:50.408 20:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.408 20:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:50.408 20:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:19:50.408 20:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:50.408 20:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:50.408 20:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:50.408 20:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:50.408 20:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWMxMGEzNjAyNmIxYTI0ZmQ2YmU1Nzc5MmVkMjEwNGQ0ZjYyYzJiZTY4NWQxNzc2av0ZvA==: 00:19:50.408 20:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmE3YjgzYjYzZDhiZjIyZjc2MDRjNDQyN2Q5NjExNjDXDsMB: 00:19:50.408 20:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:50.408 20:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:50.408 20:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWMxMGEzNjAyNmIxYTI0ZmQ2YmU1Nzc5MmVkMjEwNGQ0ZjYyYzJiZTY4NWQxNzc2av0ZvA==: 00:19:50.408 20:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmE3YjgzYjYzZDhiZjIyZjc2MDRjNDQyN2Q5NjExNjDXDsMB: ]] 00:19:50.408 20:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmE3YjgzYjYzZDhiZjIyZjc2MDRjNDQyN2Q5NjExNjDXDsMB: 00:19:50.408 20:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:19:50.408 20:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:50.408 20:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:50.408 20:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:50.408 20:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:50.408 20:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:50.408 20:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:50.408 20:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.408 20:47:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:50.408 20:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.408 20:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:50.408 20:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:50.408 20:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:50.408 20:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:50.408 20:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:50.408 20:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:50.408 20:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:50.408 20:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:50.408 20:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:50.408 20:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:50.408 20:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:50.408 20:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:50.408 20:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.408 20:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:50.667 nvme0n1 00:19:50.667 20:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.667 20:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:50.667 20:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:50.667 20:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.667 20:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:50.667 20:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.667 20:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:50.667 20:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:50.667 20:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.667 20:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:50.667 20:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.667 20:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:50.667 20:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:19:50.667 20:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:50.667 20:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:50.667 20:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:50.667 
20:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:50.667 20:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MmJlYmUxNGJlMzdlYmMyYmYyNzg0MGQwNjcwNTQwZWYwYmJjMTMzZjM5YzMxMWIwYWE1M2YyMDVmZmU3NDA1YbG6x3I=: 00:19:50.667 20:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:50.667 20:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:50.667 20:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:50.667 20:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MmJlYmUxNGJlMzdlYmMyYmYyNzg0MGQwNjcwNTQwZWYwYmJjMTMzZjM5YzMxMWIwYWE1M2YyMDVmZmU3NDA1YbG6x3I=: 00:19:50.667 20:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:50.667 20:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:19:50.667 20:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:50.667 20:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:50.667 20:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:50.667 20:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:50.667 20:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:50.667 20:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:50.667 20:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.667 20:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:50.667 20:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.667 20:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:50.667 20:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:50.667 20:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:50.667 20:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:50.667 20:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:50.667 20:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:50.667 20:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:50.667 20:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:50.667 20:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:50.667 20:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:50.667 20:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:50.667 20:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:50.667 20:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.667 20:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
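The trace repeats one pattern per digest/DH-group/keyid combination: the expected DH-HMAC-CHAP parameters are echoed into the kernel nvmet host entry (the echoed 'hmac(shaNNN)', ffdheNNNN and DHHC-1 values; xtrace does not show the configfs redirect targets), the SPDK initiator is restricted to the same digest and DH group with bdev_nvme_set_options, and the controller is attached with the matching key and, when one was generated, the controller key. A minimal stand-alone sketch of the host-side step, assuming the test's rpc_cmd wrapper resolves to scripts/rpc.py and using the key1/ckey1 names registered earlier in the trace:

# Sketch only: pin negotiation to one digest/DH group, then attach with DH-HMAC-CHAP.
# Assumes key1/ckey1 were already registered via keyring_file_add_key.
scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1
# Verify the controller came up, then detach before the next combination.
scripts/rpc.py bdev_nvme_get_controllers
scripts/rpc.py bdev_nvme_detach_controller nvme0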
00:19:50.667 nvme0n1 00:19:50.667 20:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.667 20:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:50.667 20:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.667 20:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:50.667 20:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:50.667 20:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.667 20:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:50.667 20:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:50.667 20:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.667 20:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:50.667 20:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.667 20:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:50.667 20:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:50.668 20:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:19:50.668 20:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:50.668 20:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:50.668 20:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:50.668 20:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:50.668 20:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGI1MDVjNDA0ZDI2MWJmMDBmZTgwOTdiYTMwZmE2NGJeHzz7: 00:19:50.668 20:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWU1MWQ0Y2YzNTlkOTZkMzViNDVjZWRkODcxZjVhOGFiNDgwYWNiN2Q1MGU0Y2Q1MGY3ZGE3YTkxNTdjNTkxNM9v5xw=: 00:19:50.668 20:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:50.668 20:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:51.235 20:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGI1MDVjNDA0ZDI2MWJmMDBmZTgwOTdiYTMwZmE2NGJeHzz7: 00:19:51.235 20:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWU1MWQ0Y2YzNTlkOTZkMzViNDVjZWRkODcxZjVhOGFiNDgwYWNiN2Q1MGU0Y2Q1MGY3ZGE3YTkxNTdjNTkxNM9v5xw=: ]] 00:19:51.235 20:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWU1MWQ0Y2YzNTlkOTZkMzViNDVjZWRkODcxZjVhOGFiNDgwYWNiN2Q1MGU0Y2Q1MGY3ZGE3YTkxNTdjNTkxNM9v5xw=: 00:19:51.235 20:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:19:51.235 20:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:51.235 20:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:51.235 20:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:51.235 20:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:51.235 20:47:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:51.235 20:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:51.235 20:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.235 20:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:51.235 20:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.235 20:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:51.235 20:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:51.235 20:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:51.235 20:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:51.235 20:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:51.235 20:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:51.235 20:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:51.235 20:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:51.235 20:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:51.235 20:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:51.235 20:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:51.235 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:51.235 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.235 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:51.235 nvme0n1 00:19:51.235 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.235 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:51.235 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:51.235 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.235 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:51.235 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.235 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:51.235 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:51.235 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.235 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:51.235 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.235 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:51.235 20:47:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:19:51.235 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:51.235 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:51.235 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:51.235 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:51.235 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTA2ZmE2ZTY0Njc0ZDAwYzIyNTQxYmI1MzA3Y2Q3ODFmZjY2NTlmYjA2ZDZlNGNml+laCg==: 00:19:51.236 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWE1YTkyYjVhZTUzNDU0OTQxY2Y5ODI4NmY1MDVkMDZiNjE0Mzc0MjRmMGE1MWUy29oHFQ==: 00:19:51.236 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:51.236 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:51.236 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTA2ZmE2ZTY0Njc0ZDAwYzIyNTQxYmI1MzA3Y2Q3ODFmZjY2NTlmYjA2ZDZlNGNml+laCg==: 00:19:51.236 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWE1YTkyYjVhZTUzNDU0OTQxY2Y5ODI4NmY1MDVkMDZiNjE0Mzc0MjRmMGE1MWUy29oHFQ==: ]] 00:19:51.236 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWE1YTkyYjVhZTUzNDU0OTQxY2Y5ODI4NmY1MDVkMDZiNjE0Mzc0MjRmMGE1MWUy29oHFQ==: 00:19:51.236 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:19:51.236 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:51.236 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:51.236 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:51.236 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:51.236 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:51.236 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:51.236 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.236 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:51.236 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.236 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:51.236 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:51.236 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:51.236 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:51.236 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:51.236 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:51.236 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:51.236 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:51.236 20:47:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:51.236 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:51.236 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:51.236 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:51.236 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.236 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:51.494 nvme0n1 00:19:51.494 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.494 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:51.494 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.494 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:51.494 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:51.494 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.495 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:51.495 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:51.495 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.495 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:51.495 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.495 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:51.495 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:19:51.495 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:51.495 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:51.495 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:51.495 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:51.495 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTUxZmVjNzQ0YWNmYTRhYzNmNmIyNTZhZWRmNDlhMWJLh0Ry: 00:19:51.495 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWY5ODFhNmFhNGRjNWIxYTI2N2E1YWUyZDRkYmY2Y2YC0VJ4: 00:19:51.495 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:51.495 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:51.495 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTUxZmVjNzQ0YWNmYTRhYzNmNmIyNTZhZWRmNDlhMWJLh0Ry: 00:19:51.495 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OWY5ODFhNmFhNGRjNWIxYTI2N2E1YWUyZDRkYmY2Y2YC0VJ4: ]] 00:19:51.495 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OWY5ODFhNmFhNGRjNWIxYTI2N2E1YWUyZDRkYmY2Y2YC0VJ4: 00:19:51.495 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:19:51.495 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:51.495 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:51.495 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:51.495 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:51.495 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:51.495 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:51.495 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.495 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:51.495 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.495 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:51.495 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:51.495 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:51.495 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:51.495 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:51.495 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:51.495 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:51.495 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:51.495 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:51.495 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:51.495 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:51.495 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:51.495 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.495 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:51.754 nvme0n1 00:19:51.754 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.754 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:51.754 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.754 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:51.754 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:51.754 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.754 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:51.754 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:19:51.754 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.754 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:51.754 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.754 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:51.754 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:19:51.754 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:51.754 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:51.754 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:51.754 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:51.754 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWMxMGEzNjAyNmIxYTI0ZmQ2YmU1Nzc5MmVkMjEwNGQ0ZjYyYzJiZTY4NWQxNzc2av0ZvA==: 00:19:51.754 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmE3YjgzYjYzZDhiZjIyZjc2MDRjNDQyN2Q5NjExNjDXDsMB: 00:19:51.754 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:51.754 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:51.754 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWMxMGEzNjAyNmIxYTI0ZmQ2YmU1Nzc5MmVkMjEwNGQ0ZjYyYzJiZTY4NWQxNzc2av0ZvA==: 00:19:51.754 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmE3YjgzYjYzZDhiZjIyZjc2MDRjNDQyN2Q5NjExNjDXDsMB: ]] 00:19:51.754 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmE3YjgzYjYzZDhiZjIyZjc2MDRjNDQyN2Q5NjExNjDXDsMB: 00:19:51.754 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:19:51.754 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:51.754 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:51.754 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:51.754 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:51.754 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:51.754 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:51.754 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.754 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:51.754 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.754 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:51.754 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:51.754 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:51.754 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:51.754 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:51.754 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:51.754 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:51.754 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:51.754 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:51.754 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:51.754 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:51.754 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:51.754 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.754 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:52.014 nvme0n1 00:19:52.014 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.014 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:52.014 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:52.014 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.014 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:52.014 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.014 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:52.014 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:52.014 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.014 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:52.014 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.014 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:52.014 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:19:52.014 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:52.014 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:52.014 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:52.014 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:52.014 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MmJlYmUxNGJlMzdlYmMyYmYyNzg0MGQwNjcwNTQwZWYwYmJjMTMzZjM5YzMxMWIwYWE1M2YyMDVmZmU3NDA1YbG6x3I=: 00:19:52.014 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:52.014 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:52.014 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:52.014 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MmJlYmUxNGJlMzdlYmMyYmYyNzg0MGQwNjcwNTQwZWYwYmJjMTMzZjM5YzMxMWIwYWE1M2YyMDVmZmU3NDA1YbG6x3I=: 00:19:52.014 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:52.014 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:19:52.014 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:52.014 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:52.014 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:52.014 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:52.014 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:52.014 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:52.014 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.014 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:52.014 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.014 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:52.014 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:52.014 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:52.014 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:52.014 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:52.014 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:52.014 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:52.014 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:52.014 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:52.014 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:52.014 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:52.014 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:52.014 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.014 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:52.014 nvme0n1 00:19:52.014 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.014 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:52.014 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.014 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:52.014 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:52.014 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.014 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:52.014 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:52.014 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.014 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:52.014 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.014 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:52.014 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:52.014 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:19:52.014 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:52.014 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:52.014 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:52.014 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:52.014 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGI1MDVjNDA0ZDI2MWJmMDBmZTgwOTdiYTMwZmE2NGJeHzz7: 00:19:52.014 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWU1MWQ0Y2YzNTlkOTZkMzViNDVjZWRkODcxZjVhOGFiNDgwYWNiN2Q1MGU0Y2Q1MGY3ZGE3YTkxNTdjNTkxNM9v5xw=: 00:19:52.014 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:52.014 20:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:52.949 20:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGI1MDVjNDA0ZDI2MWJmMDBmZTgwOTdiYTMwZmE2NGJeHzz7: 00:19:52.949 20:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWU1MWQ0Y2YzNTlkOTZkMzViNDVjZWRkODcxZjVhOGFiNDgwYWNiN2Q1MGU0Y2Q1MGY3ZGE3YTkxNTdjNTkxNM9v5xw=: ]] 00:19:52.949 20:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWU1MWQ0Y2YzNTlkOTZkMzViNDVjZWRkODcxZjVhOGFiNDgwYWNiN2Q1MGU0Y2Q1MGY3ZGE3YTkxNTdjNTkxNM9v5xw=: 00:19:52.949 20:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:19:52.949 20:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:52.949 20:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:52.949 20:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:52.949 20:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:52.949 20:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:52.949 20:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:52.949 20:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.949 20:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:52.949 20:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.949 20:47:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:52.949 20:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:52.949 20:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:52.949 20:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:52.949 20:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:52.949 20:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:52.949 20:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:52.949 20:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:52.949 20:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:52.949 20:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:52.949 20:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:52.949 20:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:52.949 20:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.949 20:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:52.949 nvme0n1 00:19:52.949 20:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.949 20:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:52.949 20:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.949 20:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:52.949 20:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:52.949 20:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.949 20:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:52.949 20:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:52.949 20:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.949 20:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:52.950 20:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.950 20:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:52.950 20:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:19:52.950 20:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:52.950 20:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:52.950 20:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:52.950 20:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:52.950 20:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NTA2ZmE2ZTY0Njc0ZDAwYzIyNTQxYmI1MzA3Y2Q3ODFmZjY2NTlmYjA2ZDZlNGNml+laCg==: 00:19:52.950 20:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWE1YTkyYjVhZTUzNDU0OTQxY2Y5ODI4NmY1MDVkMDZiNjE0Mzc0MjRmMGE1MWUy29oHFQ==: 00:19:52.950 20:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:52.950 20:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:52.950 20:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTA2ZmE2ZTY0Njc0ZDAwYzIyNTQxYmI1MzA3Y2Q3ODFmZjY2NTlmYjA2ZDZlNGNml+laCg==: 00:19:52.950 20:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWE1YTkyYjVhZTUzNDU0OTQxY2Y5ODI4NmY1MDVkMDZiNjE0Mzc0MjRmMGE1MWUy29oHFQ==: ]] 00:19:52.950 20:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWE1YTkyYjVhZTUzNDU0OTQxY2Y5ODI4NmY1MDVkMDZiNjE0Mzc0MjRmMGE1MWUy29oHFQ==: 00:19:52.950 20:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:19:52.950 20:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:52.950 20:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:52.950 20:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:52.950 20:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:52.950 20:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:52.950 20:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:52.950 20:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.950 20:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:52.950 20:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.950 20:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:52.950 20:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:52.950 20:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:52.950 20:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:52.950 20:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:52.950 20:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:52.950 20:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:52.950 20:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:52.950 20:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:52.950 20:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:52.950 20:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:52.950 20:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:52.950 20:47:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.950 20:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:53.207 nvme0n1 00:19:53.207 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.207 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:53.207 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.207 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:53.207 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:53.207 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.207 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:53.207 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:53.207 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.207 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:53.207 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.207 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:53.207 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:19:53.207 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:53.207 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:53.207 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:53.207 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:53.207 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTUxZmVjNzQ0YWNmYTRhYzNmNmIyNTZhZWRmNDlhMWJLh0Ry: 00:19:53.207 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWY5ODFhNmFhNGRjNWIxYTI2N2E1YWUyZDRkYmY2Y2YC0VJ4: 00:19:53.207 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:53.207 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:53.207 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTUxZmVjNzQ0YWNmYTRhYzNmNmIyNTZhZWRmNDlhMWJLh0Ry: 00:19:53.207 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OWY5ODFhNmFhNGRjNWIxYTI2N2E1YWUyZDRkYmY2Y2YC0VJ4: ]] 00:19:53.207 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OWY5ODFhNmFhNGRjNWIxYTI2N2E1YWUyZDRkYmY2Y2YC0VJ4: 00:19:53.207 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:19:53.207 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:53.207 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:53.207 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:53.207 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:53.207 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:53.207 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:53.207 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.207 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:53.207 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.207 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:53.207 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:53.207 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:53.207 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:53.207 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:53.207 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:53.207 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:53.207 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:53.207 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:53.207 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:53.207 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:53.207 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:53.207 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.207 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:53.466 nvme0n1 00:19:53.466 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.466 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:53.466 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:53.466 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.466 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:53.466 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.466 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:53.466 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:53.466 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.466 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:53.466 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.466 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:53.466 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe4096 3 00:19:53.466 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:53.466 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:53.466 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:53.466 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:53.466 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWMxMGEzNjAyNmIxYTI0ZmQ2YmU1Nzc5MmVkMjEwNGQ0ZjYyYzJiZTY4NWQxNzc2av0ZvA==: 00:19:53.466 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmE3YjgzYjYzZDhiZjIyZjc2MDRjNDQyN2Q5NjExNjDXDsMB: 00:19:53.466 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:53.466 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:53.466 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWMxMGEzNjAyNmIxYTI0ZmQ2YmU1Nzc5MmVkMjEwNGQ0ZjYyYzJiZTY4NWQxNzc2av0ZvA==: 00:19:53.466 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmE3YjgzYjYzZDhiZjIyZjc2MDRjNDQyN2Q5NjExNjDXDsMB: ]] 00:19:53.466 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmE3YjgzYjYzZDhiZjIyZjc2MDRjNDQyN2Q5NjExNjDXDsMB: 00:19:53.466 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:19:53.466 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:53.466 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:53.466 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:53.466 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:53.466 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:53.466 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:53.466 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.466 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:53.466 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.466 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:53.466 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:53.466 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:53.466 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:53.466 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:53.466 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:53.466 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:53.466 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:53.466 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:53.466 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:53.466 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:53.466 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:53.466 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.466 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:53.725 nvme0n1 00:19:53.725 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.725 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:53.725 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:53.725 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.725 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:53.725 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.725 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:53.725 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:53.725 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.725 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:53.725 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.725 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:53.725 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:19:53.725 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:53.725 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:53.725 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:53.725 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:53.725 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MmJlYmUxNGJlMzdlYmMyYmYyNzg0MGQwNjcwNTQwZWYwYmJjMTMzZjM5YzMxMWIwYWE1M2YyMDVmZmU3NDA1YbG6x3I=: 00:19:53.725 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:53.725 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:53.725 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:53.725 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MmJlYmUxNGJlMzdlYmMyYmYyNzg0MGQwNjcwNTQwZWYwYmJjMTMzZjM5YzMxMWIwYWE1M2YyMDVmZmU3NDA1YbG6x3I=: 00:19:53.725 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:53.725 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:19:53.725 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:53.725 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:53.725 20:47:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:53.725 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:53.725 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:53.726 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:53.726 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.726 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:53.726 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.726 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:53.726 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:53.726 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:53.726 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:53.726 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:53.726 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:53.726 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:53.726 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:53.726 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:53.726 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:53.726 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:53.726 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:53.726 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.726 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:53.984 nvme0n1 00:19:53.984 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.984 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:53.984 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:53.984 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.984 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:53.984 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.984 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:53.984 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:53.984 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.984 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:53.984 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.984 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:53.984 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:53.984 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:19:53.984 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:53.984 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:53.984 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:53.984 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:53.984 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGI1MDVjNDA0ZDI2MWJmMDBmZTgwOTdiYTMwZmE2NGJeHzz7: 00:19:53.984 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWU1MWQ0Y2YzNTlkOTZkMzViNDVjZWRkODcxZjVhOGFiNDgwYWNiN2Q1MGU0Y2Q1MGY3ZGE3YTkxNTdjNTkxNM9v5xw=: 00:19:53.984 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:53.984 20:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:55.892 20:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGI1MDVjNDA0ZDI2MWJmMDBmZTgwOTdiYTMwZmE2NGJeHzz7: 00:19:55.892 20:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWU1MWQ0Y2YzNTlkOTZkMzViNDVjZWRkODcxZjVhOGFiNDgwYWNiN2Q1MGU0Y2Q1MGY3ZGE3YTkxNTdjNTkxNM9v5xw=: ]] 00:19:55.892 20:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWU1MWQ0Y2YzNTlkOTZkMzViNDVjZWRkODcxZjVhOGFiNDgwYWNiN2Q1MGU0Y2Q1MGY3ZGE3YTkxNTdjNTkxNM9v5xw=: 00:19:55.892 20:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:19:55.892 20:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:55.892 20:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:55.892 20:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:55.892 20:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:55.892 20:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:55.892 20:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:55.892 20:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.892 20:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:55.892 20:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.892 20:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:55.892 20:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:55.892 20:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:55.892 20:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:55.892 20:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:55.892 20:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:55.892 20:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:55.892 20:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:55.892 20:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:55.892 20:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:55.892 20:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:55.892 20:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:55.892 20:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.892 20:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:56.150 nvme0n1 00:19:56.150 20:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.150 20:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:56.150 20:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:56.150 20:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.150 20:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:56.150 20:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.150 20:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:56.150 20:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:56.150 20:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.150 20:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:56.150 20:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.150 20:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:56.150 20:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:19:56.150 20:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:56.150 20:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:56.150 20:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:56.150 20:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:56.150 20:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTA2ZmE2ZTY0Njc0ZDAwYzIyNTQxYmI1MzA3Y2Q3ODFmZjY2NTlmYjA2ZDZlNGNml+laCg==: 00:19:56.150 20:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWE1YTkyYjVhZTUzNDU0OTQxY2Y5ODI4NmY1MDVkMDZiNjE0Mzc0MjRmMGE1MWUy29oHFQ==: 00:19:56.150 20:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:56.150 20:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:56.150 20:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NTA2ZmE2ZTY0Njc0ZDAwYzIyNTQxYmI1MzA3Y2Q3ODFmZjY2NTlmYjA2ZDZlNGNml+laCg==: 00:19:56.151 20:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWE1YTkyYjVhZTUzNDU0OTQxY2Y5ODI4NmY1MDVkMDZiNjE0Mzc0MjRmMGE1MWUy29oHFQ==: ]] 00:19:56.151 20:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWE1YTkyYjVhZTUzNDU0OTQxY2Y5ODI4NmY1MDVkMDZiNjE0Mzc0MjRmMGE1MWUy29oHFQ==: 00:19:56.151 20:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:19:56.151 20:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:56.151 20:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:56.151 20:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:56.151 20:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:56.151 20:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:56.151 20:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:56.151 20:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.151 20:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:56.151 20:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.151 20:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:56.151 20:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:56.151 20:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:56.151 20:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:56.151 20:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:56.151 20:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:56.151 20:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:56.151 20:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:56.151 20:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:56.151 20:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:56.151 20:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:56.151 20:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:56.151 20:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.151 20:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:56.716 nvme0n1 00:19:56.716 20:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.716 20:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:56.716 20:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.716 20:47:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:56.716 20:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:56.716 20:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.716 20:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:56.716 20:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:56.716 20:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.716 20:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:56.716 20:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.716 20:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:56.716 20:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:19:56.716 20:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:56.716 20:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:56.716 20:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:56.717 20:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:56.717 20:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTUxZmVjNzQ0YWNmYTRhYzNmNmIyNTZhZWRmNDlhMWJLh0Ry: 00:19:56.717 20:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWY5ODFhNmFhNGRjNWIxYTI2N2E1YWUyZDRkYmY2Y2YC0VJ4: 00:19:56.717 20:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:56.717 20:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:56.717 20:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTUxZmVjNzQ0YWNmYTRhYzNmNmIyNTZhZWRmNDlhMWJLh0Ry: 00:19:56.717 20:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OWY5ODFhNmFhNGRjNWIxYTI2N2E1YWUyZDRkYmY2Y2YC0VJ4: ]] 00:19:56.717 20:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OWY5ODFhNmFhNGRjNWIxYTI2N2E1YWUyZDRkYmY2Y2YC0VJ4: 00:19:56.717 20:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:19:56.717 20:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:56.717 20:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:56.717 20:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:56.717 20:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:56.717 20:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:56.717 20:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:56.717 20:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.717 20:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:56.717 20:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.717 20:47:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:56.717 20:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:56.717 20:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:56.717 20:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:56.717 20:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:56.717 20:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:56.717 20:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:56.717 20:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:56.717 20:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:56.717 20:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:56.717 20:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:56.717 20:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:56.717 20:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.717 20:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:56.975 nvme0n1 00:19:56.975 20:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.975 20:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:56.975 20:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.975 20:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:56.975 20:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:56.975 20:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.975 20:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:56.975 20:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:56.975 20:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.975 20:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:56.975 20:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.975 20:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:56.975 20:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:19:56.975 20:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:56.975 20:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:56.975 20:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:56.975 20:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:56.975 20:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:YWMxMGEzNjAyNmIxYTI0ZmQ2YmU1Nzc5MmVkMjEwNGQ0ZjYyYzJiZTY4NWQxNzc2av0ZvA==: 00:19:56.975 20:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmE3YjgzYjYzZDhiZjIyZjc2MDRjNDQyN2Q5NjExNjDXDsMB: 00:19:56.975 20:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:56.975 20:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:56.975 20:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWMxMGEzNjAyNmIxYTI0ZmQ2YmU1Nzc5MmVkMjEwNGQ0ZjYyYzJiZTY4NWQxNzc2av0ZvA==: 00:19:56.975 20:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmE3YjgzYjYzZDhiZjIyZjc2MDRjNDQyN2Q5NjExNjDXDsMB: ]] 00:19:56.975 20:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmE3YjgzYjYzZDhiZjIyZjc2MDRjNDQyN2Q5NjExNjDXDsMB: 00:19:56.975 20:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:19:56.975 20:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:56.975 20:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:56.975 20:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:56.975 20:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:56.975 20:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:56.975 20:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:56.975 20:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.975 20:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:56.975 20:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.975 20:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:56.975 20:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:56.975 20:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:56.975 20:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:56.975 20:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:56.975 20:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:56.975 20:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:56.975 20:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:56.975 20:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:56.975 20:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:56.975 20:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:56.975 20:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:56.975 20:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.975 
20:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:57.541 nvme0n1 00:19:57.541 20:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.541 20:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:57.541 20:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.541 20:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:57.541 20:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:57.541 20:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.541 20:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:57.541 20:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:57.541 20:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.541 20:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:57.541 20:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.541 20:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:57.541 20:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:19:57.541 20:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:57.541 20:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:57.541 20:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:57.541 20:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:57.541 20:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MmJlYmUxNGJlMzdlYmMyYmYyNzg0MGQwNjcwNTQwZWYwYmJjMTMzZjM5YzMxMWIwYWE1M2YyMDVmZmU3NDA1YbG6x3I=: 00:19:57.541 20:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:57.541 20:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:57.541 20:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:57.541 20:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MmJlYmUxNGJlMzdlYmMyYmYyNzg0MGQwNjcwNTQwZWYwYmJjMTMzZjM5YzMxMWIwYWE1M2YyMDVmZmU3NDA1YbG6x3I=: 00:19:57.541 20:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:57.541 20:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:19:57.541 20:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:57.541 20:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:57.541 20:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:57.541 20:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:57.541 20:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:57.541 20:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:57.541 20:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.541 20:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:57.541 20:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.541 20:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:57.541 20:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:57.541 20:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:57.541 20:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:57.541 20:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:57.541 20:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:57.541 20:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:57.541 20:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:57.541 20:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:57.541 20:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:57.541 20:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:57.541 20:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:57.541 20:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.541 20:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:57.799 nvme0n1 00:19:57.799 20:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.799 20:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:57.799 20:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.799 20:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:57.799 20:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:57.799 20:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.057 20:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:58.057 20:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:58.057 20:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.057 20:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:58.057 20:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.057 20:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:58.057 20:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:58.057 20:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:19:58.057 20:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:58.057 20:47:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:58.057 20:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:58.057 20:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:58.057 20:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGI1MDVjNDA0ZDI2MWJmMDBmZTgwOTdiYTMwZmE2NGJeHzz7: 00:19:58.057 20:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWU1MWQ0Y2YzNTlkOTZkMzViNDVjZWRkODcxZjVhOGFiNDgwYWNiN2Q1MGU0Y2Q1MGY3ZGE3YTkxNTdjNTkxNM9v5xw=: 00:19:58.057 20:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:58.057 20:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:58.057 20:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGI1MDVjNDA0ZDI2MWJmMDBmZTgwOTdiYTMwZmE2NGJeHzz7: 00:19:58.057 20:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWU1MWQ0Y2YzNTlkOTZkMzViNDVjZWRkODcxZjVhOGFiNDgwYWNiN2Q1MGU0Y2Q1MGY3ZGE3YTkxNTdjNTkxNM9v5xw=: ]] 00:19:58.057 20:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWU1MWQ0Y2YzNTlkOTZkMzViNDVjZWRkODcxZjVhOGFiNDgwYWNiN2Q1MGU0Y2Q1MGY3ZGE3YTkxNTdjNTkxNM9v5xw=: 00:19:58.057 20:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:19:58.057 20:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:58.057 20:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:58.057 20:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:58.057 20:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:58.057 20:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:58.057 20:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:58.058 20:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.058 20:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:58.058 20:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.058 20:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:58.058 20:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:58.058 20:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:58.058 20:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:58.058 20:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:58.058 20:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:58.058 20:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:58.058 20:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:58.058 20:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:58.058 20:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:58.058 20:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:58.058 20:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:58.058 20:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.058 20:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:58.624 nvme0n1 00:19:58.624 20:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.624 20:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:58.624 20:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:58.624 20:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.624 20:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:58.624 20:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.624 20:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:58.624 20:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:58.624 20:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.624 20:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:58.624 20:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.625 20:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:58.625 20:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:19:58.625 20:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:58.625 20:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:58.625 20:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:58.625 20:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:58.625 20:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTA2ZmE2ZTY0Njc0ZDAwYzIyNTQxYmI1MzA3Y2Q3ODFmZjY2NTlmYjA2ZDZlNGNml+laCg==: 00:19:58.625 20:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWE1YTkyYjVhZTUzNDU0OTQxY2Y5ODI4NmY1MDVkMDZiNjE0Mzc0MjRmMGE1MWUy29oHFQ==: 00:19:58.625 20:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:58.625 20:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:58.625 20:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTA2ZmE2ZTY0Njc0ZDAwYzIyNTQxYmI1MzA3Y2Q3ODFmZjY2NTlmYjA2ZDZlNGNml+laCg==: 00:19:58.625 20:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWE1YTkyYjVhZTUzNDU0OTQxY2Y5ODI4NmY1MDVkMDZiNjE0Mzc0MjRmMGE1MWUy29oHFQ==: ]] 00:19:58.625 20:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWE1YTkyYjVhZTUzNDU0OTQxY2Y5ODI4NmY1MDVkMDZiNjE0Mzc0MjRmMGE1MWUy29oHFQ==: 00:19:58.625 20:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:19:58.625 20:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:58.625 20:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:58.625 20:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:58.625 20:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:58.625 20:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:58.625 20:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:58.625 20:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.625 20:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:58.625 20:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.625 20:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:58.625 20:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:58.625 20:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:58.625 20:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:58.625 20:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:58.625 20:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:58.625 20:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:58.625 20:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:58.625 20:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:58.625 20:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:58.625 20:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:58.625 20:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:58.625 20:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.625 20:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:59.191 nvme0n1 00:19:59.191 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.191 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:59.191 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:59.191 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.191 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:59.450 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.450 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:59.450 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:59.450 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:19:59.450 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:59.450 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.450 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:59.450 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:19:59.450 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:59.450 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:59.450 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:59.450 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:59.450 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTUxZmVjNzQ0YWNmYTRhYzNmNmIyNTZhZWRmNDlhMWJLh0Ry: 00:19:59.450 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWY5ODFhNmFhNGRjNWIxYTI2N2E1YWUyZDRkYmY2Y2YC0VJ4: 00:19:59.450 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:59.450 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:59.450 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTUxZmVjNzQ0YWNmYTRhYzNmNmIyNTZhZWRmNDlhMWJLh0Ry: 00:19:59.450 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OWY5ODFhNmFhNGRjNWIxYTI2N2E1YWUyZDRkYmY2Y2YC0VJ4: ]] 00:19:59.450 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OWY5ODFhNmFhNGRjNWIxYTI2N2E1YWUyZDRkYmY2Y2YC0VJ4: 00:19:59.450 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:19:59.450 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:59.450 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:59.450 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:59.450 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:59.450 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:59.450 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:59.450 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.450 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:59.450 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.450 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:59.450 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:59.450 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:59.450 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:59.450 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:59.450 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:59.450 
20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:59.450 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:59.450 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:59.450 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:59.450 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:59.450 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:59.450 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.450 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:00.017 nvme0n1 00:20:00.017 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.017 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:00.017 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.017 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:00.017 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:00.017 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.017 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:00.017 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:00.017 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.017 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:00.017 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.017 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:00.017 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:20:00.017 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:00.017 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:00.017 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:00.017 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:00.017 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWMxMGEzNjAyNmIxYTI0ZmQ2YmU1Nzc5MmVkMjEwNGQ0ZjYyYzJiZTY4NWQxNzc2av0ZvA==: 00:20:00.017 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmE3YjgzYjYzZDhiZjIyZjc2MDRjNDQyN2Q5NjExNjDXDsMB: 00:20:00.017 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:00.017 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:00.017 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWMxMGEzNjAyNmIxYTI0ZmQ2YmU1Nzc5MmVkMjEwNGQ0ZjYyYzJiZTY4NWQxNzc2av0ZvA==: 00:20:00.017 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:NmE3YjgzYjYzZDhiZjIyZjc2MDRjNDQyN2Q5NjExNjDXDsMB: ]] 00:20:00.017 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmE3YjgzYjYzZDhiZjIyZjc2MDRjNDQyN2Q5NjExNjDXDsMB: 00:20:00.017 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:20:00.017 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:00.017 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:00.017 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:00.017 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:00.017 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:00.017 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:00.017 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.017 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:00.017 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.017 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:00.017 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:00.017 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:00.017 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:00.017 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:00.017 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:00.017 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:00.017 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:00.017 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:00.017 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:00.017 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:00.017 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:00.017 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.017 20:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:00.585 nvme0n1 00:20:00.585 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.585 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:00.585 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.585 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:00.585 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:00.585 20:47:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.585 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:00.585 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:00.585 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.585 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:00.585 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.585 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:00.585 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:20:00.585 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:00.585 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:00.585 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:00.585 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:00.585 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MmJlYmUxNGJlMzdlYmMyYmYyNzg0MGQwNjcwNTQwZWYwYmJjMTMzZjM5YzMxMWIwYWE1M2YyMDVmZmU3NDA1YbG6x3I=: 00:20:00.585 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:00.585 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:00.585 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:00.585 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MmJlYmUxNGJlMzdlYmMyYmYyNzg0MGQwNjcwNTQwZWYwYmJjMTMzZjM5YzMxMWIwYWE1M2YyMDVmZmU3NDA1YbG6x3I=: 00:20:00.585 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:00.585 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:20:00.585 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:00.585 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:00.585 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:00.585 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:00.585 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:00.585 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:00.585 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.585 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:00.585 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.585 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:00.585 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:00.585 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:00.585 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:00.585 20:47:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:00.585 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:00.585 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:00.585 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:00.585 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:00.585 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:00.585 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:00.585 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:00.585 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.585 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:01.154 nvme0n1 00:20:01.154 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.154 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:01.154 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.154 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:01.154 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:01.154 20:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.154 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:01.154 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:01.154 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.154 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:01.154 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.154 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:20:01.154 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:01.154 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:01.154 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:20:01.154 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:01.154 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:01.154 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:01.154 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:01.154 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGI1MDVjNDA0ZDI2MWJmMDBmZTgwOTdiYTMwZmE2NGJeHzz7: 00:20:01.154 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:MWU1MWQ0Y2YzNTlkOTZkMzViNDVjZWRkODcxZjVhOGFiNDgwYWNiN2Q1MGU0Y2Q1MGY3ZGE3YTkxNTdjNTkxNM9v5xw=: 00:20:01.154 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:01.154 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:01.154 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGI1MDVjNDA0ZDI2MWJmMDBmZTgwOTdiYTMwZmE2NGJeHzz7: 00:20:01.154 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWU1MWQ0Y2YzNTlkOTZkMzViNDVjZWRkODcxZjVhOGFiNDgwYWNiN2Q1MGU0Y2Q1MGY3ZGE3YTkxNTdjNTkxNM9v5xw=: ]] 00:20:01.154 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWU1MWQ0Y2YzNTlkOTZkMzViNDVjZWRkODcxZjVhOGFiNDgwYWNiN2Q1MGU0Y2Q1MGY3ZGE3YTkxNTdjNTkxNM9v5xw=: 00:20:01.154 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:20:01.154 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:01.154 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:01.154 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:01.154 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:01.154 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:01.154 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:01.154 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.154 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:01.154 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.154 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:01.154 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:01.154 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:01.154 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:01.154 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:01.154 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:01.154 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:01.154 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:01.154 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:01.154 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:01.154 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:01.154 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:01.154 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.154 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:20:01.414 nvme0n1 00:20:01.414 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.414 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:01.414 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.414 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:01.414 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:01.414 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.414 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:01.414 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:01.414 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.414 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:01.414 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.414 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:01.414 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:20:01.414 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:01.414 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:01.414 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:01.414 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:01.414 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTA2ZmE2ZTY0Njc0ZDAwYzIyNTQxYmI1MzA3Y2Q3ODFmZjY2NTlmYjA2ZDZlNGNml+laCg==: 00:20:01.414 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWE1YTkyYjVhZTUzNDU0OTQxY2Y5ODI4NmY1MDVkMDZiNjE0Mzc0MjRmMGE1MWUy29oHFQ==: 00:20:01.414 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:01.414 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:01.414 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTA2ZmE2ZTY0Njc0ZDAwYzIyNTQxYmI1MzA3Y2Q3ODFmZjY2NTlmYjA2ZDZlNGNml+laCg==: 00:20:01.414 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWE1YTkyYjVhZTUzNDU0OTQxY2Y5ODI4NmY1MDVkMDZiNjE0Mzc0MjRmMGE1MWUy29oHFQ==: ]] 00:20:01.414 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWE1YTkyYjVhZTUzNDU0OTQxY2Y5ODI4NmY1MDVkMDZiNjE0Mzc0MjRmMGE1MWUy29oHFQ==: 00:20:01.414 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:20:01.414 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:01.414 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:01.414 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:01.414 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:01.414 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:20:01.414 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:01.414 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.414 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:01.414 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.414 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:01.414 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:01.414 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:01.414 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:01.414 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:01.414 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:01.414 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:01.414 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:01.414 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:01.414 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:01.414 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:01.414 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:01.414 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.414 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:01.414 nvme0n1 00:20:01.414 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.414 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:01.414 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:01.414 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.414 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:01.414 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.414 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:01.414 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:01.414 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.414 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:01.414 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.414 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:01.414 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:20:01.414 
20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:01.414 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:01.414 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:01.414 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:01.414 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTUxZmVjNzQ0YWNmYTRhYzNmNmIyNTZhZWRmNDlhMWJLh0Ry: 00:20:01.414 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWY5ODFhNmFhNGRjNWIxYTI2N2E1YWUyZDRkYmY2Y2YC0VJ4: 00:20:01.414 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:01.414 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:01.414 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTUxZmVjNzQ0YWNmYTRhYzNmNmIyNTZhZWRmNDlhMWJLh0Ry: 00:20:01.414 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OWY5ODFhNmFhNGRjNWIxYTI2N2E1YWUyZDRkYmY2Y2YC0VJ4: ]] 00:20:01.414 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OWY5ODFhNmFhNGRjNWIxYTI2N2E1YWUyZDRkYmY2Y2YC0VJ4: 00:20:01.414 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:20:01.414 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:01.414 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:01.414 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:01.414 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:01.414 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:01.414 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:01.414 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.414 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:01.414 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.414 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:01.414 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:01.414 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:01.414 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:01.414 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:01.414 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:01.414 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:01.414 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:01.414 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:01.414 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:01.414 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:01.414 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:01.414 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.415 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:01.673 nvme0n1 00:20:01.673 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.673 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:01.673 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:01.673 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.673 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:01.673 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.673 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:01.673 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:01.673 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.673 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:01.673 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.673 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:01.673 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:20:01.673 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:01.673 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:01.673 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:01.674 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:01.674 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWMxMGEzNjAyNmIxYTI0ZmQ2YmU1Nzc5MmVkMjEwNGQ0ZjYyYzJiZTY4NWQxNzc2av0ZvA==: 00:20:01.674 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmE3YjgzYjYzZDhiZjIyZjc2MDRjNDQyN2Q5NjExNjDXDsMB: 00:20:01.674 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:01.674 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:01.674 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWMxMGEzNjAyNmIxYTI0ZmQ2YmU1Nzc5MmVkMjEwNGQ0ZjYyYzJiZTY4NWQxNzc2av0ZvA==: 00:20:01.674 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmE3YjgzYjYzZDhiZjIyZjc2MDRjNDQyN2Q5NjExNjDXDsMB: ]] 00:20:01.674 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmE3YjgzYjYzZDhiZjIyZjc2MDRjNDQyN2Q5NjExNjDXDsMB: 00:20:01.674 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:20:01.674 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:01.674 
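The same cycle repeats for every keyid under the current DH group; the trace shows the driving loops (for dhgroup in "${dhgroups[@]}", for keyid in "${!keys[@]}") and the conditional controller-key expansion at host/auth.sh@58. A condensed sketch of that loop shape follows; the key material here is a placeholder, and it is assumed (as above) that keyN/ckeyN were registered with the host keyring earlier in the run.

    # Sketch of the loop structure visible in the trace; keys/ckeys are
    # placeholders, not the DHHC-1 secrets used by the actual run.
    digest=sha384
    dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096)
    keys=("DHHC-1:00:placeholder0:" "DHHC-1:00:placeholder1:")
    ckeys=("DHHC-1:00:ctrl-placeholder0:" "")      # empty entry => no ctrlr key
    for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do
            # Only pass --dhchap-ctrlr-key when a controller key exists,
            # mirroring: ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
            ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
            rpc.py bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
            rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
                -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
                --dhchap-key "key${keyid}" "${ckey[@]}"
            rpc.py bdev_nvme_detach_controller nvme0
        done
    done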
20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:01.674 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:01.674 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:01.674 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:01.674 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:01.674 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.674 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:01.674 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.674 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:01.674 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:01.674 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:01.674 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:01.674 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:01.674 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:01.674 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:01.674 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:01.674 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:01.674 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:01.674 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:01.674 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:01.674 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.674 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:01.933 nvme0n1 00:20:01.933 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.933 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:01.933 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:01.933 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.933 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:01.933 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.933 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:01.933 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:01.933 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.933 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:20:01.933 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.933 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:01.933 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:20:01.933 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:01.933 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:01.933 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:01.933 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:01.933 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MmJlYmUxNGJlMzdlYmMyYmYyNzg0MGQwNjcwNTQwZWYwYmJjMTMzZjM5YzMxMWIwYWE1M2YyMDVmZmU3NDA1YbG6x3I=: 00:20:01.933 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:01.933 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:01.933 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:01.933 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MmJlYmUxNGJlMzdlYmMyYmYyNzg0MGQwNjcwNTQwZWYwYmJjMTMzZjM5YzMxMWIwYWE1M2YyMDVmZmU3NDA1YbG6x3I=: 00:20:01.933 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:01.933 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:20:01.933 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:01.933 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:01.933 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:01.933 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:01.933 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:01.933 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:01.933 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.933 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:01.933 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.933 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:01.933 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:01.933 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:01.933 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:01.933 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:01.933 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:01.933 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:01.933 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:01.933 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:01.933 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:01.933 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:01.933 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:01.933 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.933 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:01.933 nvme0n1 00:20:01.933 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.933 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:01.933 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.933 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:01.933 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:01.933 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.933 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:01.933 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:01.933 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.933 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:01.933 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.933 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:01.933 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:01.933 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:20:01.933 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:01.933 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:01.933 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:01.933 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:01.933 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGI1MDVjNDA0ZDI2MWJmMDBmZTgwOTdiYTMwZmE2NGJeHzz7: 00:20:01.933 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWU1MWQ0Y2YzNTlkOTZkMzViNDVjZWRkODcxZjVhOGFiNDgwYWNiN2Q1MGU0Y2Q1MGY3ZGE3YTkxNTdjNTkxNM9v5xw=: 00:20:01.933 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:01.933 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:01.933 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGI1MDVjNDA0ZDI2MWJmMDBmZTgwOTdiYTMwZmE2NGJeHzz7: 00:20:01.933 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWU1MWQ0Y2YzNTlkOTZkMzViNDVjZWRkODcxZjVhOGFiNDgwYWNiN2Q1MGU0Y2Q1MGY3ZGE3YTkxNTdjNTkxNM9v5xw=: ]] 00:20:01.933 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:MWU1MWQ0Y2YzNTlkOTZkMzViNDVjZWRkODcxZjVhOGFiNDgwYWNiN2Q1MGU0Y2Q1MGY3ZGE3YTkxNTdjNTkxNM9v5xw=: 00:20:01.933 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:20:01.934 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:01.934 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:01.934 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:01.934 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:01.934 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:01.934 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:01.934 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.934 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:02.193 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.193 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:02.193 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:02.193 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:02.193 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:02.193 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:02.193 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:02.193 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:02.193 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:02.193 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:02.193 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:02.193 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:02.193 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:02.193 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.193 20:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:02.193 nvme0n1 00:20:02.193 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.193 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:02.193 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:02.193 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.193 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:02.193 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.193 
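Each attach in the trace is followed by the same verification step: bdev_nvme_get_controllers is piped through jq to extract the controller names and the result is compared against nvme0. A small helper capturing that check is sketched below; the function name verify_controller is hypothetical and only illustrates the pattern traced at host/auth.sh@64.

    # Sketch of the post-attach verification traced above: list controllers,
    # pull their names with jq, and require the single expected name.
    verify_controller() {
        local expected=$1 name
        name=$(rpc.py bdev_nvme_get_controllers | jq -r '.[].name')
        if [[ $name != "$expected" ]]; then
            echo "expected controller '$expected', got '$name'" >&2
            return 1
        fi
    }
    verify_controller nvme0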
20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:02.193 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:02.193 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.193 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:02.193 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.193 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:02.193 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:20:02.193 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:02.193 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:02.193 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:02.193 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:02.193 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTA2ZmE2ZTY0Njc0ZDAwYzIyNTQxYmI1MzA3Y2Q3ODFmZjY2NTlmYjA2ZDZlNGNml+laCg==: 00:20:02.193 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWE1YTkyYjVhZTUzNDU0OTQxY2Y5ODI4NmY1MDVkMDZiNjE0Mzc0MjRmMGE1MWUy29oHFQ==: 00:20:02.193 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:02.193 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:02.193 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTA2ZmE2ZTY0Njc0ZDAwYzIyNTQxYmI1MzA3Y2Q3ODFmZjY2NTlmYjA2ZDZlNGNml+laCg==: 00:20:02.193 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWE1YTkyYjVhZTUzNDU0OTQxY2Y5ODI4NmY1MDVkMDZiNjE0Mzc0MjRmMGE1MWUy29oHFQ==: ]] 00:20:02.193 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWE1YTkyYjVhZTUzNDU0OTQxY2Y5ODI4NmY1MDVkMDZiNjE0Mzc0MjRmMGE1MWUy29oHFQ==: 00:20:02.193 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:20:02.193 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:02.193 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:02.193 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:02.193 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:02.193 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:02.193 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:02.193 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.193 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:02.193 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.193 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:02.193 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:02.193 20:47:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:02.193 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:02.193 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:02.193 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:02.193 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:02.193 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:02.193 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:02.193 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:02.193 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:02.193 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:02.193 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.193 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:02.453 nvme0n1 00:20:02.453 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.453 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:02.453 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.453 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:02.453 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:02.453 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.453 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:02.453 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:02.453 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.453 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:02.453 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.453 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:02.453 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:20:02.453 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:02.453 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:02.453 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:02.453 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:02.453 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTUxZmVjNzQ0YWNmYTRhYzNmNmIyNTZhZWRmNDlhMWJLh0Ry: 00:20:02.453 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWY5ODFhNmFhNGRjNWIxYTI2N2E1YWUyZDRkYmY2Y2YC0VJ4: 00:20:02.453 20:47:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:02.453 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:02.453 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTUxZmVjNzQ0YWNmYTRhYzNmNmIyNTZhZWRmNDlhMWJLh0Ry: 00:20:02.453 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OWY5ODFhNmFhNGRjNWIxYTI2N2E1YWUyZDRkYmY2Y2YC0VJ4: ]] 00:20:02.453 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OWY5ODFhNmFhNGRjNWIxYTI2N2E1YWUyZDRkYmY2Y2YC0VJ4: 00:20:02.453 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:20:02.453 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:02.453 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:02.453 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:02.453 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:02.453 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:02.453 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:02.453 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.453 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:02.453 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.453 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:02.453 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:02.453 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:02.453 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:02.453 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:02.453 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:02.453 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:02.453 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:02.453 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:02.453 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:02.453 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:02.453 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:02.453 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.453 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:02.453 nvme0n1 00:20:02.453 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.453 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:02.453 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.453 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:02.453 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:02.736 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.736 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:02.736 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:02.736 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.736 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:02.736 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.736 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:02.736 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:20:02.736 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:02.736 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:02.736 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:02.736 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:02.736 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWMxMGEzNjAyNmIxYTI0ZmQ2YmU1Nzc5MmVkMjEwNGQ0ZjYyYzJiZTY4NWQxNzc2av0ZvA==: 00:20:02.736 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmE3YjgzYjYzZDhiZjIyZjc2MDRjNDQyN2Q5NjExNjDXDsMB: 00:20:02.736 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:02.736 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:02.736 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWMxMGEzNjAyNmIxYTI0ZmQ2YmU1Nzc5MmVkMjEwNGQ0ZjYyYzJiZTY4NWQxNzc2av0ZvA==: 00:20:02.736 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmE3YjgzYjYzZDhiZjIyZjc2MDRjNDQyN2Q5NjExNjDXDsMB: ]] 00:20:02.736 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmE3YjgzYjYzZDhiZjIyZjc2MDRjNDQyN2Q5NjExNjDXDsMB: 00:20:02.736 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:20:02.736 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:02.736 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:02.736 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:02.736 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:02.736 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:02.736 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:02.736 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.736 20:47:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:02.736 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.736 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:02.736 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:02.736 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:02.736 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:02.736 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:02.736 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:02.736 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:02.736 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:02.736 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:02.736 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:02.736 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:02.736 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:02.736 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.736 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:02.736 nvme0n1 00:20:02.736 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.736 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:02.736 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:02.736 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.736 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:02.736 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.736 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:02.736 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:02.736 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.736 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:02.736 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.736 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:02.736 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:20:02.737 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:02.737 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:02.737 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:02.737 
20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:02.737 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MmJlYmUxNGJlMzdlYmMyYmYyNzg0MGQwNjcwNTQwZWYwYmJjMTMzZjM5YzMxMWIwYWE1M2YyMDVmZmU3NDA1YbG6x3I=: 00:20:02.737 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:02.737 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:02.737 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:02.737 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MmJlYmUxNGJlMzdlYmMyYmYyNzg0MGQwNjcwNTQwZWYwYmJjMTMzZjM5YzMxMWIwYWE1M2YyMDVmZmU3NDA1YbG6x3I=: 00:20:02.737 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:02.737 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:20:02.737 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:02.737 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:02.737 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:02.737 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:02.737 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:02.737 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:02.737 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.737 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:02.996 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.996 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:02.996 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:02.996 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:02.996 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:02.996 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:02.996 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:02.996 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:02.996 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:02.996 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:02.996 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:02.996 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:02.996 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:02.996 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.996 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
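The repeated nvmf/common.sh@769-783 entries show how get_main_ns_ip picks the address passed to bdev_nvme_attach_controller: an associative array maps the transport to the name of the variable that holds the connect address, and that name is resolved indirectly, yielding 10.0.0.1 for tcp in this run. The sketch below reconstructs that selection logic from the trace; the variable names come from the log, while the exact error handling is an assumption.

    # Sketch of the address selection traced in nvmf/common.sh: map the active
    # transport to the name of the variable holding the connect address, then
    # resolve it via indirect expansion. Error branches are an assumption.
    get_main_ns_ip() {
        local ip
        local -A ip_candidates=(
            [rdma]=NVMF_FIRST_TARGET_IP
            [tcp]=NVMF_INITIATOR_IP
        )
        [[ -z ${TEST_TRANSPORT:-} || -z ${ip_candidates[$TEST_TRANSPORT]:-} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}   # e.g. NVMF_INITIATOR_IP for tcp
        ip=${!ip}                              # indirect expansion -> 10.0.0.1 here
        [[ -z $ip ]] && return 1
        echo "$ip"
    }
    # In this run: TEST_TRANSPORT=tcp NVMF_INITIATOR_IP=10.0.0.1 get_main_ns_ip  => 10.0.0.1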
00:20:02.996 nvme0n1 00:20:02.996 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.996 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:02.996 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:02.996 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.996 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:02.996 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.996 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:02.996 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:02.996 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.996 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:02.996 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.996 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:02.996 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:02.996 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:20:02.996 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:02.996 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:02.996 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:02.996 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:02.996 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGI1MDVjNDA0ZDI2MWJmMDBmZTgwOTdiYTMwZmE2NGJeHzz7: 00:20:02.996 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWU1MWQ0Y2YzNTlkOTZkMzViNDVjZWRkODcxZjVhOGFiNDgwYWNiN2Q1MGU0Y2Q1MGY3ZGE3YTkxNTdjNTkxNM9v5xw=: 00:20:02.996 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:02.996 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:02.996 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGI1MDVjNDA0ZDI2MWJmMDBmZTgwOTdiYTMwZmE2NGJeHzz7: 00:20:02.996 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWU1MWQ0Y2YzNTlkOTZkMzViNDVjZWRkODcxZjVhOGFiNDgwYWNiN2Q1MGU0Y2Q1MGY3ZGE3YTkxNTdjNTkxNM9v5xw=: ]] 00:20:02.996 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWU1MWQ0Y2YzNTlkOTZkMzViNDVjZWRkODcxZjVhOGFiNDgwYWNiN2Q1MGU0Y2Q1MGY3ZGE3YTkxNTdjNTkxNM9v5xw=: 00:20:02.996 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:20:02.996 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:02.996 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:02.996 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:02.996 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:02.996 20:47:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:02.996 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:02.996 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.996 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:02.996 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.996 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:02.997 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:02.997 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:02.997 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:02.997 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:02.997 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:02.997 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:02.997 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:02.997 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:02.997 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:02.997 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:02.997 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:02.997 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.997 20:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:03.255 nvme0n1 00:20:03.255 20:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.255 20:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:03.255 20:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.255 20:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:03.255 20:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:03.255 20:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.255 20:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:03.255 20:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:03.255 20:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.255 20:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:03.255 20:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.255 20:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:03.255 20:47:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:20:03.255 20:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:03.255 20:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:03.255 20:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:03.256 20:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:03.256 20:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTA2ZmE2ZTY0Njc0ZDAwYzIyNTQxYmI1MzA3Y2Q3ODFmZjY2NTlmYjA2ZDZlNGNml+laCg==: 00:20:03.256 20:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWE1YTkyYjVhZTUzNDU0OTQxY2Y5ODI4NmY1MDVkMDZiNjE0Mzc0MjRmMGE1MWUy29oHFQ==: 00:20:03.256 20:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:03.256 20:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:03.256 20:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTA2ZmE2ZTY0Njc0ZDAwYzIyNTQxYmI1MzA3Y2Q3ODFmZjY2NTlmYjA2ZDZlNGNml+laCg==: 00:20:03.256 20:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWE1YTkyYjVhZTUzNDU0OTQxY2Y5ODI4NmY1MDVkMDZiNjE0Mzc0MjRmMGE1MWUy29oHFQ==: ]] 00:20:03.256 20:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWE1YTkyYjVhZTUzNDU0OTQxY2Y5ODI4NmY1MDVkMDZiNjE0Mzc0MjRmMGE1MWUy29oHFQ==: 00:20:03.256 20:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:20:03.256 20:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:03.256 20:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:03.256 20:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:03.256 20:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:03.256 20:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:03.256 20:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:03.256 20:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.256 20:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:03.256 20:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.256 20:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:03.256 20:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:03.256 20:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:03.256 20:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:03.256 20:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:03.256 20:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:03.256 20:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:03.256 20:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:03.256 20:47:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:03.256 20:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:03.256 20:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:03.256 20:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:03.256 20:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.256 20:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:03.515 nvme0n1 00:20:03.515 20:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.515 20:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:03.515 20:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:03.515 20:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.515 20:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:03.515 20:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.515 20:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:03.515 20:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:03.515 20:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.515 20:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:03.515 20:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.515 20:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:03.515 20:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:20:03.515 20:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:03.515 20:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:03.515 20:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:03.515 20:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:03.515 20:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTUxZmVjNzQ0YWNmYTRhYzNmNmIyNTZhZWRmNDlhMWJLh0Ry: 00:20:03.515 20:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWY5ODFhNmFhNGRjNWIxYTI2N2E1YWUyZDRkYmY2Y2YC0VJ4: 00:20:03.515 20:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:03.515 20:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:03.515 20:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTUxZmVjNzQ0YWNmYTRhYzNmNmIyNTZhZWRmNDlhMWJLh0Ry: 00:20:03.515 20:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OWY5ODFhNmFhNGRjNWIxYTI2N2E1YWUyZDRkYmY2Y2YC0VJ4: ]] 00:20:03.515 20:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OWY5ODFhNmFhNGRjNWIxYTI2N2E1YWUyZDRkYmY2Y2YC0VJ4: 00:20:03.515 20:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:20:03.515 20:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:03.515 20:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:03.515 20:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:03.515 20:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:03.515 20:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:03.515 20:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:03.515 20:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.515 20:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:03.515 20:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.515 20:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:03.515 20:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:03.515 20:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:03.515 20:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:03.515 20:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:03.515 20:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:03.515 20:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:03.515 20:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:03.515 20:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:03.515 20:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:03.515 20:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:03.515 20:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:03.515 20:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.515 20:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:03.775 nvme0n1 00:20:03.775 20:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.775 20:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:03.775 20:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:03.775 20:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.775 20:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:03.775 20:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.775 20:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:03.775 20:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:20:03.775 20:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.775 20:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:03.775 20:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.775 20:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:03.775 20:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:20:03.775 20:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:03.775 20:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:03.775 20:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:03.775 20:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:03.775 20:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWMxMGEzNjAyNmIxYTI0ZmQ2YmU1Nzc5MmVkMjEwNGQ0ZjYyYzJiZTY4NWQxNzc2av0ZvA==: 00:20:03.775 20:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmE3YjgzYjYzZDhiZjIyZjc2MDRjNDQyN2Q5NjExNjDXDsMB: 00:20:03.775 20:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:03.775 20:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:03.775 20:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWMxMGEzNjAyNmIxYTI0ZmQ2YmU1Nzc5MmVkMjEwNGQ0ZjYyYzJiZTY4NWQxNzc2av0ZvA==: 00:20:03.775 20:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmE3YjgzYjYzZDhiZjIyZjc2MDRjNDQyN2Q5NjExNjDXDsMB: ]] 00:20:03.775 20:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmE3YjgzYjYzZDhiZjIyZjc2MDRjNDQyN2Q5NjExNjDXDsMB: 00:20:03.775 20:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:20:03.775 20:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:03.775 20:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:03.775 20:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:03.775 20:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:03.775 20:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:03.775 20:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:03.775 20:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.775 20:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:03.775 20:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.775 20:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:03.775 20:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:03.775 20:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:03.775 20:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:03.775 20:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:03.775 20:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:03.775 20:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:03.775 20:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:03.775 20:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:03.775 20:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:03.775 20:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:03.775 20:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:03.775 20:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.775 20:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:04.033 nvme0n1 00:20:04.034 20:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.034 20:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:04.034 20:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:04.034 20:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.034 20:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:04.034 20:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.034 20:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:04.034 20:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:04.034 20:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.034 20:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:04.034 20:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.034 20:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:04.034 20:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:20:04.034 20:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:04.034 20:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:04.034 20:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:04.034 20:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:04.034 20:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MmJlYmUxNGJlMzdlYmMyYmYyNzg0MGQwNjcwNTQwZWYwYmJjMTMzZjM5YzMxMWIwYWE1M2YyMDVmZmU3NDA1YbG6x3I=: 00:20:04.034 20:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:04.034 20:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:04.034 20:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:04.034 20:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MmJlYmUxNGJlMzdlYmMyYmYyNzg0MGQwNjcwNTQwZWYwYmJjMTMzZjM5YzMxMWIwYWE1M2YyMDVmZmU3NDA1YbG6x3I=: 00:20:04.034 20:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:04.034 20:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:20:04.034 20:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:04.034 20:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:04.034 20:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:04.034 20:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:04.034 20:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:04.034 20:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:04.034 20:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.034 20:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:04.034 20:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.034 20:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:04.034 20:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:04.034 20:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:04.034 20:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:04.034 20:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:04.034 20:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:04.034 20:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:04.034 20:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:04.034 20:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:04.034 20:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:04.034 20:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:04.034 20:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:04.034 20:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.034 20:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:04.295 nvme0n1 00:20:04.295 20:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.295 20:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:04.295 20:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:04.295 20:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.295 20:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:04.295 20:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
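A minimal sketch of the RPC sequence behind one connect_authenticate pass traced above, assuming rpc_cmd resolves to SPDK's scripts/rpc.py; the address, port, NQNs, digest/dhgroup values and key names (key2/ckey2, set up earlier in the test) are taken from the trace itself:

  # configure the host side for this digest/dhgroup combination
  scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
  # attach with DH-HMAC-CHAP, using the key names registered earlier in the test
  scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key2 --dhchap-ctrlr-key ckey2
  # authentication succeeded if the controller shows up, then tear it down
  scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expect nvme0
  scripts/rpc.py bdev_nvme_detach_controller nvme0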
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.295 20:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:04.295 20:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:04.295 20:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.295 20:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:04.295 20:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.295 20:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:04.295 20:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:04.295 20:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:20:04.295 20:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:04.295 20:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:04.295 20:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:04.295 20:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:04.295 20:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGI1MDVjNDA0ZDI2MWJmMDBmZTgwOTdiYTMwZmE2NGJeHzz7: 00:20:04.295 20:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWU1MWQ0Y2YzNTlkOTZkMzViNDVjZWRkODcxZjVhOGFiNDgwYWNiN2Q1MGU0Y2Q1MGY3ZGE3YTkxNTdjNTkxNM9v5xw=: 00:20:04.295 20:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:04.295 20:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:04.295 20:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGI1MDVjNDA0ZDI2MWJmMDBmZTgwOTdiYTMwZmE2NGJeHzz7: 00:20:04.295 20:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWU1MWQ0Y2YzNTlkOTZkMzViNDVjZWRkODcxZjVhOGFiNDgwYWNiN2Q1MGU0Y2Q1MGY3ZGE3YTkxNTdjNTkxNM9v5xw=: ]] 00:20:04.295 20:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWU1MWQ0Y2YzNTlkOTZkMzViNDVjZWRkODcxZjVhOGFiNDgwYWNiN2Q1MGU0Y2Q1MGY3ZGE3YTkxNTdjNTkxNM9v5xw=: 00:20:04.295 20:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:20:04.295 20:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:04.295 20:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:04.295 20:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:04.295 20:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:04.295 20:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:04.295 20:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:04.295 20:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.295 20:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:04.295 20:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.295 20:47:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:04.295 20:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:04.295 20:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:04.295 20:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:04.295 20:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:04.295 20:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:04.295 20:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:04.295 20:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:04.295 20:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:04.295 20:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:04.295 20:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:04.295 20:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:04.295 20:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.295 20:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:04.560 nvme0n1 00:20:04.560 20:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.560 20:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:04.560 20:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:04.560 20:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.560 20:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:04.560 20:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.560 20:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:04.560 20:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:04.560 20:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.560 20:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:04.560 20:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.560 20:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:04.560 20:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:20:04.560 20:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:04.560 20:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:04.560 20:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:04.560 20:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:04.560 20:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NTA2ZmE2ZTY0Njc0ZDAwYzIyNTQxYmI1MzA3Y2Q3ODFmZjY2NTlmYjA2ZDZlNGNml+laCg==: 00:20:04.560 20:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWE1YTkyYjVhZTUzNDU0OTQxY2Y5ODI4NmY1MDVkMDZiNjE0Mzc0MjRmMGE1MWUy29oHFQ==: 00:20:04.560 20:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:04.560 20:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:04.560 20:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTA2ZmE2ZTY0Njc0ZDAwYzIyNTQxYmI1MzA3Y2Q3ODFmZjY2NTlmYjA2ZDZlNGNml+laCg==: 00:20:04.560 20:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWE1YTkyYjVhZTUzNDU0OTQxY2Y5ODI4NmY1MDVkMDZiNjE0Mzc0MjRmMGE1MWUy29oHFQ==: ]] 00:20:04.560 20:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWE1YTkyYjVhZTUzNDU0OTQxY2Y5ODI4NmY1MDVkMDZiNjE0Mzc0MjRmMGE1MWUy29oHFQ==: 00:20:04.560 20:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:20:04.560 20:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:04.560 20:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:04.560 20:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:04.560 20:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:04.560 20:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:04.560 20:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:04.560 20:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.560 20:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:04.560 20:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.560 20:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:04.560 20:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:04.560 20:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:04.560 20:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:04.560 20:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:04.560 20:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:04.560 20:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:04.560 20:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:04.560 20:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:04.560 20:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:04.560 20:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:04.560 20:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:04.560 20:47:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.560 20:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:05.128 nvme0n1 00:20:05.128 20:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.128 20:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:05.128 20:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.128 20:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:05.128 20:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:05.128 20:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.128 20:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:05.128 20:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:05.128 20:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.128 20:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:05.128 20:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.128 20:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:05.128 20:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:20:05.128 20:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:05.128 20:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:05.128 20:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:05.128 20:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:05.128 20:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTUxZmVjNzQ0YWNmYTRhYzNmNmIyNTZhZWRmNDlhMWJLh0Ry: 00:20:05.128 20:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWY5ODFhNmFhNGRjNWIxYTI2N2E1YWUyZDRkYmY2Y2YC0VJ4: 00:20:05.128 20:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:05.128 20:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:05.128 20:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTUxZmVjNzQ0YWNmYTRhYzNmNmIyNTZhZWRmNDlhMWJLh0Ry: 00:20:05.128 20:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OWY5ODFhNmFhNGRjNWIxYTI2N2E1YWUyZDRkYmY2Y2YC0VJ4: ]] 00:20:05.128 20:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OWY5ODFhNmFhNGRjNWIxYTI2N2E1YWUyZDRkYmY2Y2YC0VJ4: 00:20:05.128 20:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:20:05.128 20:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:05.128 20:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:05.128 20:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:05.128 20:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:05.128 20:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:05.128 20:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:05.128 20:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.128 20:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:05.128 20:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.128 20:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:05.128 20:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:05.128 20:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:05.128 20:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:05.128 20:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:05.128 20:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:05.128 20:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:05.128 20:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:05.128 20:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:05.128 20:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:05.128 20:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:05.128 20:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:05.128 20:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.128 20:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:05.387 nvme0n1 00:20:05.387 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.387 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:05.387 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.387 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:05.387 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:05.387 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.387 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:05.387 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:05.387 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.387 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:05.387 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.387 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:05.387 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe6144 3 00:20:05.387 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:05.387 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:05.387 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:05.387 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:05.387 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWMxMGEzNjAyNmIxYTI0ZmQ2YmU1Nzc5MmVkMjEwNGQ0ZjYyYzJiZTY4NWQxNzc2av0ZvA==: 00:20:05.388 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmE3YjgzYjYzZDhiZjIyZjc2MDRjNDQyN2Q5NjExNjDXDsMB: 00:20:05.388 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:05.388 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:05.388 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWMxMGEzNjAyNmIxYTI0ZmQ2YmU1Nzc5MmVkMjEwNGQ0ZjYyYzJiZTY4NWQxNzc2av0ZvA==: 00:20:05.388 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmE3YjgzYjYzZDhiZjIyZjc2MDRjNDQyN2Q5NjExNjDXDsMB: ]] 00:20:05.388 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmE3YjgzYjYzZDhiZjIyZjc2MDRjNDQyN2Q5NjExNjDXDsMB: 00:20:05.388 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:20:05.388 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:05.388 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:05.388 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:05.388 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:05.388 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:05.388 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:05.388 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.388 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:05.388 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.388 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:05.388 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:05.388 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:05.388 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:05.388 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:05.388 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:05.388 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:05.388 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:05.388 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:05.388 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:05.388 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:05.388 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:05.388 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.388 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:05.955 nvme0n1 00:20:05.955 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.955 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:05.955 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.955 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:05.955 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:05.955 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.955 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:05.955 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:05.955 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.955 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:05.955 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.955 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:05.955 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:20:05.955 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:05.955 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:05.955 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:05.955 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:05.955 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MmJlYmUxNGJlMzdlYmMyYmYyNzg0MGQwNjcwNTQwZWYwYmJjMTMzZjM5YzMxMWIwYWE1M2YyMDVmZmU3NDA1YbG6x3I=: 00:20:05.955 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:05.955 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:05.955 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:05.955 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MmJlYmUxNGJlMzdlYmMyYmYyNzg0MGQwNjcwNTQwZWYwYmJjMTMzZjM5YzMxMWIwYWE1M2YyMDVmZmU3NDA1YbG6x3I=: 00:20:05.955 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:05.955 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:20:05.955 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:05.955 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:05.955 20:48:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:05.955 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:05.955 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:05.955 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:05.955 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.955 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:05.955 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.955 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:05.955 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:05.955 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:05.955 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:05.955 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:05.955 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:05.955 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:05.955 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:05.955 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:05.955 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:05.955 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:05.955 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:05.955 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.955 20:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:06.213 nvme0n1 00:20:06.213 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.213 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:06.213 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.213 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:06.213 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:06.213 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.213 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:06.213 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:06.213 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.213 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:06.213 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
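The secrets exercised above use the DHHC-1 representation, whose second field declares the HMAC applied to the secret (00 none, 01 SHA-256, 02 SHA-384, 03 SHA-512). A purely illustrative helper, not part of the test scripts, that reports that field for a key string from this log:

  dhchap_key_hmac() {
      # strip the "DHHC-1:" prefix, keep the two-digit hash field
      local key=$1 field
      field=${key#DHHC-1:}
      field=${field%%:*}
      case $field in
          00) echo none ;;
          01) echo sha256 ;;
          02) echo sha384 ;;
          03) echo sha512 ;;
          *)  echo unknown ;;
      esac
  }
  dhchap_key_hmac 'DHHC-1:02:YWMxMGEzNjAyNmIxYTI0ZmQ2YmU1Nzc5MmVkMjEwNGQ0ZjYyYzJiZTY4NWQxNzc2av0ZvA==:'   # -> sha384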
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.213 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:06.213 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:06.213 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:20:06.213 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:06.214 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:06.214 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:06.214 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:06.214 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGI1MDVjNDA0ZDI2MWJmMDBmZTgwOTdiYTMwZmE2NGJeHzz7: 00:20:06.214 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWU1MWQ0Y2YzNTlkOTZkMzViNDVjZWRkODcxZjVhOGFiNDgwYWNiN2Q1MGU0Y2Q1MGY3ZGE3YTkxNTdjNTkxNM9v5xw=: 00:20:06.214 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:06.214 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:06.214 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGI1MDVjNDA0ZDI2MWJmMDBmZTgwOTdiYTMwZmE2NGJeHzz7: 00:20:06.214 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWU1MWQ0Y2YzNTlkOTZkMzViNDVjZWRkODcxZjVhOGFiNDgwYWNiN2Q1MGU0Y2Q1MGY3ZGE3YTkxNTdjNTkxNM9v5xw=: ]] 00:20:06.214 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWU1MWQ0Y2YzNTlkOTZkMzViNDVjZWRkODcxZjVhOGFiNDgwYWNiN2Q1MGU0Y2Q1MGY3ZGE3YTkxNTdjNTkxNM9v5xw=: 00:20:06.214 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:20:06.214 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:06.214 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:06.214 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:06.214 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:06.214 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:06.214 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:06.214 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.214 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:06.214 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.214 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:06.214 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:06.214 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:06.214 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:06.214 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:06.214 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:06.214 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:06.214 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:06.214 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:06.214 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:06.214 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:06.214 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:06.214 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.214 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:07.148 nvme0n1 00:20:07.148 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.148 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:07.148 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:07.148 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.148 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:07.148 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.148 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:07.148 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:07.148 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.148 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:07.148 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.148 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:07.148 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:20:07.148 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:07.148 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:07.148 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:07.148 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:07.148 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTA2ZmE2ZTY0Njc0ZDAwYzIyNTQxYmI1MzA3Y2Q3ODFmZjY2NTlmYjA2ZDZlNGNml+laCg==: 00:20:07.148 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWE1YTkyYjVhZTUzNDU0OTQxY2Y5ODI4NmY1MDVkMDZiNjE0Mzc0MjRmMGE1MWUy29oHFQ==: 00:20:07.148 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:07.148 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:07.148 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NTA2ZmE2ZTY0Njc0ZDAwYzIyNTQxYmI1MzA3Y2Q3ODFmZjY2NTlmYjA2ZDZlNGNml+laCg==: 00:20:07.148 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWE1YTkyYjVhZTUzNDU0OTQxY2Y5ODI4NmY1MDVkMDZiNjE0Mzc0MjRmMGE1MWUy29oHFQ==: ]] 00:20:07.148 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWE1YTkyYjVhZTUzNDU0OTQxY2Y5ODI4NmY1MDVkMDZiNjE0Mzc0MjRmMGE1MWUy29oHFQ==: 00:20:07.148 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:20:07.148 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:07.148 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:07.148 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:07.148 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:07.148 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:07.148 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:07.148 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.148 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:07.148 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.148 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:07.148 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:07.148 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:07.148 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:07.149 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:07.149 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:07.149 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:07.149 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:07.149 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:07.149 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:07.149 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:07.149 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:07.149 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.149 20:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:07.714 nvme0n1 00:20:07.714 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.714 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:07.714 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:07.714 20:48:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.714 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:07.715 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.715 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:07.715 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:07.715 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.715 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:07.715 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.715 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:07.715 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:20:07.715 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:07.715 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:07.715 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:07.715 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:07.715 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTUxZmVjNzQ0YWNmYTRhYzNmNmIyNTZhZWRmNDlhMWJLh0Ry: 00:20:07.715 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWY5ODFhNmFhNGRjNWIxYTI2N2E1YWUyZDRkYmY2Y2YC0VJ4: 00:20:07.715 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:07.715 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:07.715 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTUxZmVjNzQ0YWNmYTRhYzNmNmIyNTZhZWRmNDlhMWJLh0Ry: 00:20:07.715 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OWY5ODFhNmFhNGRjNWIxYTI2N2E1YWUyZDRkYmY2Y2YC0VJ4: ]] 00:20:07.715 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OWY5ODFhNmFhNGRjNWIxYTI2N2E1YWUyZDRkYmY2Y2YC0VJ4: 00:20:07.715 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:20:07.715 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:07.715 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:07.715 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:07.715 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:07.715 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:07.715 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:07.715 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.715 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:07.715 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.715 20:48:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:07.715 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:07.715 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:07.715 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:07.715 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:07.715 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:07.715 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:07.715 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:07.715 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:07.715 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:07.715 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:07.715 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:07.715 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.715 20:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:08.280 nvme0n1 00:20:08.280 20:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.280 20:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:08.280 20:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.280 20:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:08.280 20:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:08.280 20:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.280 20:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:08.280 20:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:08.280 20:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.280 20:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:08.280 20:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.280 20:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:08.280 20:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:20:08.280 20:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:08.280 20:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:08.280 20:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:08.280 20:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:08.280 20:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:YWMxMGEzNjAyNmIxYTI0ZmQ2YmU1Nzc5MmVkMjEwNGQ0ZjYyYzJiZTY4NWQxNzc2av0ZvA==: 00:20:08.280 20:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmE3YjgzYjYzZDhiZjIyZjc2MDRjNDQyN2Q5NjExNjDXDsMB: 00:20:08.280 20:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:08.280 20:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:08.280 20:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWMxMGEzNjAyNmIxYTI0ZmQ2YmU1Nzc5MmVkMjEwNGQ0ZjYyYzJiZTY4NWQxNzc2av0ZvA==: 00:20:08.280 20:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmE3YjgzYjYzZDhiZjIyZjc2MDRjNDQyN2Q5NjExNjDXDsMB: ]] 00:20:08.280 20:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmE3YjgzYjYzZDhiZjIyZjc2MDRjNDQyN2Q5NjExNjDXDsMB: 00:20:08.280 20:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:20:08.280 20:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:08.280 20:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:08.280 20:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:08.280 20:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:08.281 20:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:08.281 20:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:08.281 20:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.281 20:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:08.539 20:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.539 20:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:08.539 20:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:08.539 20:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:08.539 20:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:08.539 20:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:08.539 20:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:08.539 20:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:08.539 20:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:08.539 20:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:08.539 20:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:08.539 20:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:08.539 20:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:08.539 20:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.539 
20:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:09.104 nvme0n1 00:20:09.104 20:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.104 20:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:09.104 20:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:09.104 20:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.104 20:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:09.104 20:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.104 20:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:09.104 20:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:09.104 20:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.104 20:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:09.104 20:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.104 20:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:09.104 20:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:20:09.104 20:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:09.104 20:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:09.104 20:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:09.104 20:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:09.104 20:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MmJlYmUxNGJlMzdlYmMyYmYyNzg0MGQwNjcwNTQwZWYwYmJjMTMzZjM5YzMxMWIwYWE1M2YyMDVmZmU3NDA1YbG6x3I=: 00:20:09.104 20:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:09.104 20:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:09.104 20:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:09.105 20:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MmJlYmUxNGJlMzdlYmMyYmYyNzg0MGQwNjcwNTQwZWYwYmJjMTMzZjM5YzMxMWIwYWE1M2YyMDVmZmU3NDA1YbG6x3I=: 00:20:09.105 20:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:09.105 20:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:20:09.105 20:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:09.105 20:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:09.105 20:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:09.105 20:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:09.105 20:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:09.105 20:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:09.105 20:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.105 20:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:09.105 20:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.105 20:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:09.105 20:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:09.105 20:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:09.105 20:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:09.105 20:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:09.105 20:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:09.105 20:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:09.105 20:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:09.105 20:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:09.105 20:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:09.105 20:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:09.105 20:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:09.105 20:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.105 20:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:09.670 nvme0n1 00:20:09.670 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.670 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:09.670 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.670 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:09.670 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:09.670 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.671 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:09.671 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:09.671 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.671 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:09.930 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.930 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:20:09.930 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:09.930 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:09.930 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:20:09.930 20:48:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:09.930 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:09.930 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:09.930 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:09.930 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGI1MDVjNDA0ZDI2MWJmMDBmZTgwOTdiYTMwZmE2NGJeHzz7: 00:20:09.930 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWU1MWQ0Y2YzNTlkOTZkMzViNDVjZWRkODcxZjVhOGFiNDgwYWNiN2Q1MGU0Y2Q1MGY3ZGE3YTkxNTdjNTkxNM9v5xw=: 00:20:09.930 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:09.930 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:09.930 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGI1MDVjNDA0ZDI2MWJmMDBmZTgwOTdiYTMwZmE2NGJeHzz7: 00:20:09.930 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWU1MWQ0Y2YzNTlkOTZkMzViNDVjZWRkODcxZjVhOGFiNDgwYWNiN2Q1MGU0Y2Q1MGY3ZGE3YTkxNTdjNTkxNM9v5xw=: ]] 00:20:09.930 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWU1MWQ0Y2YzNTlkOTZkMzViNDVjZWRkODcxZjVhOGFiNDgwYWNiN2Q1MGU0Y2Q1MGY3ZGE3YTkxNTdjNTkxNM9v5xw=: 00:20:09.930 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:20:09.930 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:09.930 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:09.930 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:09.930 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:09.930 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:09.930 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:09.930 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.930 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:09.930 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.930 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:09.930 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:09.930 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:09.930 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:09.930 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:09.930 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:09.930 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:09.930 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:09.930 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:09.930 20:48:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:09.930 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:09.930 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:09.930 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.930 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:09.930 nvme0n1 00:20:09.930 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.930 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:09.930 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.930 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:09.931 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:09.931 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.931 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:09.931 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:09.931 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.931 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:09.931 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.931 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:09.931 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:20:09.931 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:09.931 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:09.931 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:09.931 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:09.931 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTA2ZmE2ZTY0Njc0ZDAwYzIyNTQxYmI1MzA3Y2Q3ODFmZjY2NTlmYjA2ZDZlNGNml+laCg==: 00:20:09.931 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWE1YTkyYjVhZTUzNDU0OTQxY2Y5ODI4NmY1MDVkMDZiNjE0Mzc0MjRmMGE1MWUy29oHFQ==: 00:20:09.931 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:09.931 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:09.931 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTA2ZmE2ZTY0Njc0ZDAwYzIyNTQxYmI1MzA3Y2Q3ODFmZjY2NTlmYjA2ZDZlNGNml+laCg==: 00:20:09.931 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWE1YTkyYjVhZTUzNDU0OTQxY2Y5ODI4NmY1MDVkMDZiNjE0Mzc0MjRmMGE1MWUy29oHFQ==: ]] 00:20:09.931 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWE1YTkyYjVhZTUzNDU0OTQxY2Y5ODI4NmY1MDVkMDZiNjE0Mzc0MjRmMGE1MWUy29oHFQ==: 00:20:09.931 20:48:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:20:09.931 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:09.931 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:09.931 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:09.931 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:09.931 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:09.931 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:09.931 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.931 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:09.931 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.931 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:09.931 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:09.931 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:09.931 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:09.931 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:09.931 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:09.931 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:09.931 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:09.931 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:09.931 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:09.931 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:09.931 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:09.931 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.931 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:10.191 nvme0n1 00:20:10.191 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.191 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:10.191 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.191 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:10.191 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:10.191 20:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.191 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:10.191 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:10.191 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.191 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:10.191 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.191 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:10.191 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:20:10.191 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:10.191 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:10.191 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:10.191 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:10.191 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTUxZmVjNzQ0YWNmYTRhYzNmNmIyNTZhZWRmNDlhMWJLh0Ry: 00:20:10.191 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWY5ODFhNmFhNGRjNWIxYTI2N2E1YWUyZDRkYmY2Y2YC0VJ4: 00:20:10.191 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:10.191 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:10.191 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTUxZmVjNzQ0YWNmYTRhYzNmNmIyNTZhZWRmNDlhMWJLh0Ry: 00:20:10.191 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OWY5ODFhNmFhNGRjNWIxYTI2N2E1YWUyZDRkYmY2Y2YC0VJ4: ]] 00:20:10.191 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OWY5ODFhNmFhNGRjNWIxYTI2N2E1YWUyZDRkYmY2Y2YC0VJ4: 00:20:10.191 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:20:10.191 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:10.191 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:10.191 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:10.191 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:10.191 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:10.191 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:10.191 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.191 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:10.191 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.191 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:10.191 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:10.191 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:10.191 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:10.191 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:10.191 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:10.191 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:10.191 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:10.191 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:10.191 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:10.191 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:10.191 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:10.191 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.191 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:10.191 nvme0n1 00:20:10.191 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.191 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:10.191 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.191 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:10.191 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:10.191 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.454 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:10.454 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:10.454 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.454 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:10.455 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.455 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:10.455 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:20:10.455 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:10.455 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:10.455 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:10.455 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:10.455 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWMxMGEzNjAyNmIxYTI0ZmQ2YmU1Nzc5MmVkMjEwNGQ0ZjYyYzJiZTY4NWQxNzc2av0ZvA==: 00:20:10.455 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmE3YjgzYjYzZDhiZjIyZjc2MDRjNDQyN2Q5NjExNjDXDsMB: 00:20:10.455 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:10.455 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:10.455 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:02:YWMxMGEzNjAyNmIxYTI0ZmQ2YmU1Nzc5MmVkMjEwNGQ0ZjYyYzJiZTY4NWQxNzc2av0ZvA==: 00:20:10.455 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmE3YjgzYjYzZDhiZjIyZjc2MDRjNDQyN2Q5NjExNjDXDsMB: ]] 00:20:10.455 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmE3YjgzYjYzZDhiZjIyZjc2MDRjNDQyN2Q5NjExNjDXDsMB: 00:20:10.455 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:20:10.455 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:10.455 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:10.455 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:10.455 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:10.455 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:10.455 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:10.455 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.455 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:10.455 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.455 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:10.455 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:10.455 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:10.455 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:10.455 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:10.455 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:10.455 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:10.455 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:10.455 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:10.455 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:10.455 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:10.455 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:10.455 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.455 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:10.455 nvme0n1 00:20:10.455 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.455 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:10.455 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.455 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:20:10.455 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:10.455 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.455 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:10.455 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:10.455 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.455 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:10.455 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.455 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:10.455 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:20:10.455 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:10.455 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:10.455 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:10.455 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:10.455 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MmJlYmUxNGJlMzdlYmMyYmYyNzg0MGQwNjcwNTQwZWYwYmJjMTMzZjM5YzMxMWIwYWE1M2YyMDVmZmU3NDA1YbG6x3I=: 00:20:10.455 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:10.455 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:10.455 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:10.455 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MmJlYmUxNGJlMzdlYmMyYmYyNzg0MGQwNjcwNTQwZWYwYmJjMTMzZjM5YzMxMWIwYWE1M2YyMDVmZmU3NDA1YbG6x3I=: 00:20:10.455 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:10.455 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:20:10.455 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:10.455 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:10.455 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:10.455 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:10.455 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:10.455 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:10.455 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.455 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:10.455 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.455 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:10.455 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:10.455 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # ip_candidates=() 00:20:10.455 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:10.455 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:10.455 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:10.455 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:10.455 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:10.455 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:10.455 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:10.455 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:10.455 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:10.455 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.455 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:10.714 nvme0n1 00:20:10.714 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.714 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:10.714 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.714 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:10.714 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:10.714 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.714 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:10.714 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:10.714 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.714 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:10.714 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.714 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:10.714 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:10.714 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:20:10.714 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:10.714 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:10.714 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:10.714 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:10.714 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGI1MDVjNDA0ZDI2MWJmMDBmZTgwOTdiYTMwZmE2NGJeHzz7: 00:20:10.714 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:MWU1MWQ0Y2YzNTlkOTZkMzViNDVjZWRkODcxZjVhOGFiNDgwYWNiN2Q1MGU0Y2Q1MGY3ZGE3YTkxNTdjNTkxNM9v5xw=: 00:20:10.714 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:10.714 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:10.714 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGI1MDVjNDA0ZDI2MWJmMDBmZTgwOTdiYTMwZmE2NGJeHzz7: 00:20:10.714 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWU1MWQ0Y2YzNTlkOTZkMzViNDVjZWRkODcxZjVhOGFiNDgwYWNiN2Q1MGU0Y2Q1MGY3ZGE3YTkxNTdjNTkxNM9v5xw=: ]] 00:20:10.714 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWU1MWQ0Y2YzNTlkOTZkMzViNDVjZWRkODcxZjVhOGFiNDgwYWNiN2Q1MGU0Y2Q1MGY3ZGE3YTkxNTdjNTkxNM9v5xw=: 00:20:10.714 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:20:10.714 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:10.714 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:10.714 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:10.714 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:10.714 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:10.714 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:10.714 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.714 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:10.714 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.714 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:10.714 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:10.714 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:10.714 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:10.714 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:10.714 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:10.714 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:10.714 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:10.714 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:10.714 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:10.714 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:10.715 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:10.715 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.715 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:20:10.973 nvme0n1 00:20:10.973 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.973 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:10.973 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.973 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:10.973 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:10.973 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.973 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:10.973 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:10.973 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.973 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:10.973 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.973 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:10.973 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:20:10.973 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:10.973 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:10.973 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:10.973 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:10.973 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTA2ZmE2ZTY0Njc0ZDAwYzIyNTQxYmI1MzA3Y2Q3ODFmZjY2NTlmYjA2ZDZlNGNml+laCg==: 00:20:10.973 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWE1YTkyYjVhZTUzNDU0OTQxY2Y5ODI4NmY1MDVkMDZiNjE0Mzc0MjRmMGE1MWUy29oHFQ==: 00:20:10.973 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:10.973 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:10.973 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTA2ZmE2ZTY0Njc0ZDAwYzIyNTQxYmI1MzA3Y2Q3ODFmZjY2NTlmYjA2ZDZlNGNml+laCg==: 00:20:10.973 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWE1YTkyYjVhZTUzNDU0OTQxY2Y5ODI4NmY1MDVkMDZiNjE0Mzc0MjRmMGE1MWUy29oHFQ==: ]] 00:20:10.973 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWE1YTkyYjVhZTUzNDU0OTQxY2Y5ODI4NmY1MDVkMDZiNjE0Mzc0MjRmMGE1MWUy29oHFQ==: 00:20:10.973 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:20:10.973 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:10.973 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:10.973 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:10.973 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:10.973 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:20:10.973 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:10.973 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.973 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:10.973 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.973 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:10.973 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:10.973 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:10.973 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:10.973 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:10.973 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:10.973 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:10.973 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:10.973 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:10.973 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:10.973 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:10.974 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:10.974 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.974 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:10.974 nvme0n1 00:20:10.974 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.974 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:10.974 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.974 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:10.974 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:10.974 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.974 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:10.974 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:10.974 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.974 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:11.233 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.233 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:11.233 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:20:11.233 
20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:11.233 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:11.233 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:11.233 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:11.233 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTUxZmVjNzQ0YWNmYTRhYzNmNmIyNTZhZWRmNDlhMWJLh0Ry: 00:20:11.233 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWY5ODFhNmFhNGRjNWIxYTI2N2E1YWUyZDRkYmY2Y2YC0VJ4: 00:20:11.233 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:11.233 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:11.233 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTUxZmVjNzQ0YWNmYTRhYzNmNmIyNTZhZWRmNDlhMWJLh0Ry: 00:20:11.233 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OWY5ODFhNmFhNGRjNWIxYTI2N2E1YWUyZDRkYmY2Y2YC0VJ4: ]] 00:20:11.233 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OWY5ODFhNmFhNGRjNWIxYTI2N2E1YWUyZDRkYmY2Y2YC0VJ4: 00:20:11.233 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:20:11.233 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:11.233 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:11.233 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:11.233 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:11.233 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:11.233 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:11.233 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.233 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:11.233 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.233 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:11.233 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:11.233 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:11.233 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:11.233 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:11.233 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:11.233 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:11.233 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:11.233 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:11.233 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:11.233 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:11.233 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:11.233 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.233 20:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:11.233 nvme0n1 00:20:11.233 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.233 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:11.233 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:11.233 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.233 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:11.233 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.233 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:11.233 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:11.233 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.233 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:11.233 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.233 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:11.233 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:20:11.233 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:11.233 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:11.233 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:11.233 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:11.234 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWMxMGEzNjAyNmIxYTI0ZmQ2YmU1Nzc5MmVkMjEwNGQ0ZjYyYzJiZTY4NWQxNzc2av0ZvA==: 00:20:11.234 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmE3YjgzYjYzZDhiZjIyZjc2MDRjNDQyN2Q5NjExNjDXDsMB: 00:20:11.234 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:11.234 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:11.234 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWMxMGEzNjAyNmIxYTI0ZmQ2YmU1Nzc5MmVkMjEwNGQ0ZjYyYzJiZTY4NWQxNzc2av0ZvA==: 00:20:11.234 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmE3YjgzYjYzZDhiZjIyZjc2MDRjNDQyN2Q5NjExNjDXDsMB: ]] 00:20:11.234 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmE3YjgzYjYzZDhiZjIyZjc2MDRjNDQyN2Q5NjExNjDXDsMB: 00:20:11.234 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:20:11.234 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:11.234 
20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:11.234 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:11.234 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:11.234 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:11.234 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:11.234 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.234 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:11.234 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.234 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:11.234 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:11.234 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:11.234 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:11.234 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:11.234 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:11.234 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:11.234 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:11.234 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:11.234 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:11.234 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:11.234 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:11.234 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.234 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:11.492 nvme0n1 00:20:11.492 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.492 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:11.492 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:11.492 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.493 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:11.493 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.493 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:11.493 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:11.493 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.493 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:20:11.493 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.493 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:11.493 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:20:11.493 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:11.493 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:11.493 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:11.493 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:11.493 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MmJlYmUxNGJlMzdlYmMyYmYyNzg0MGQwNjcwNTQwZWYwYmJjMTMzZjM5YzMxMWIwYWE1M2YyMDVmZmU3NDA1YbG6x3I=: 00:20:11.493 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:11.493 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:11.493 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:11.493 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MmJlYmUxNGJlMzdlYmMyYmYyNzg0MGQwNjcwNTQwZWYwYmJjMTMzZjM5YzMxMWIwYWE1M2YyMDVmZmU3NDA1YbG6x3I=: 00:20:11.493 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:11.493 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:20:11.493 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:11.493 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:11.493 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:11.493 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:11.493 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:11.493 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:11.493 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.493 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:11.493 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.493 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:11.493 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:11.493 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:11.493 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:11.493 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:11.493 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:11.493 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:11.493 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:11.493 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:11.493 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:11.493 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:11.493 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:11.493 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.493 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:11.752 nvme0n1 00:20:11.752 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.752 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:11.752 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:11.752 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.753 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:11.753 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.753 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:11.753 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:11.753 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.753 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:11.753 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.753 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:11.753 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:11.753 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:20:11.753 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:11.753 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:11.753 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:11.753 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:11.753 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGI1MDVjNDA0ZDI2MWJmMDBmZTgwOTdiYTMwZmE2NGJeHzz7: 00:20:11.753 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWU1MWQ0Y2YzNTlkOTZkMzViNDVjZWRkODcxZjVhOGFiNDgwYWNiN2Q1MGU0Y2Q1MGY3ZGE3YTkxNTdjNTkxNM9v5xw=: 00:20:11.753 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:11.753 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:11.753 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGI1MDVjNDA0ZDI2MWJmMDBmZTgwOTdiYTMwZmE2NGJeHzz7: 00:20:11.753 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWU1MWQ0Y2YzNTlkOTZkMzViNDVjZWRkODcxZjVhOGFiNDgwYWNiN2Q1MGU0Y2Q1MGY3ZGE3YTkxNTdjNTkxNM9v5xw=: ]] 00:20:11.753 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:MWU1MWQ0Y2YzNTlkOTZkMzViNDVjZWRkODcxZjVhOGFiNDgwYWNiN2Q1MGU0Y2Q1MGY3ZGE3YTkxNTdjNTkxNM9v5xw=: 00:20:11.753 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:20:11.753 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:11.753 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:11.753 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:11.753 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:11.753 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:11.753 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:11.753 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.753 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:11.753 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.753 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:11.753 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:11.753 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:11.753 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:11.753 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:11.753 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:11.753 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:11.753 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:11.753 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:11.753 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:11.753 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:11.753 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:11.753 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.753 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:11.753 nvme0n1 00:20:11.753 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.753 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:11.753 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:11.753 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.753 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:12.012 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.012 
20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:12.012 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:12.012 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.012 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:12.012 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.012 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:12.012 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:20:12.012 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:12.012 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:12.012 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:12.012 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:12.012 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTA2ZmE2ZTY0Njc0ZDAwYzIyNTQxYmI1MzA3Y2Q3ODFmZjY2NTlmYjA2ZDZlNGNml+laCg==: 00:20:12.013 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWE1YTkyYjVhZTUzNDU0OTQxY2Y5ODI4NmY1MDVkMDZiNjE0Mzc0MjRmMGE1MWUy29oHFQ==: 00:20:12.013 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:12.013 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:12.013 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTA2ZmE2ZTY0Njc0ZDAwYzIyNTQxYmI1MzA3Y2Q3ODFmZjY2NTlmYjA2ZDZlNGNml+laCg==: 00:20:12.013 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWE1YTkyYjVhZTUzNDU0OTQxY2Y5ODI4NmY1MDVkMDZiNjE0Mzc0MjRmMGE1MWUy29oHFQ==: ]] 00:20:12.013 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWE1YTkyYjVhZTUzNDU0OTQxY2Y5ODI4NmY1MDVkMDZiNjE0Mzc0MjRmMGE1MWUy29oHFQ==: 00:20:12.013 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:20:12.013 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:12.013 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:12.013 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:12.013 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:12.013 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:12.013 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:12.013 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.013 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:12.013 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.013 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:12.013 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:12.013 20:48:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:12.013 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:12.013 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:12.013 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:12.013 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:12.013 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:12.013 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:12.013 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:12.013 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:12.013 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:12.013 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.013 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:12.013 nvme0n1 00:20:12.013 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.013 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:12.013 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.013 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:12.013 20:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:12.272 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.272 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:12.272 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:12.272 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.272 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:12.272 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.272 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:12.272 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:20:12.272 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:12.272 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:12.272 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:12.272 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:12.272 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTUxZmVjNzQ0YWNmYTRhYzNmNmIyNTZhZWRmNDlhMWJLh0Ry: 00:20:12.272 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWY5ODFhNmFhNGRjNWIxYTI2N2E1YWUyZDRkYmY2Y2YC0VJ4: 00:20:12.272 20:48:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:12.272 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:12.272 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTUxZmVjNzQ0YWNmYTRhYzNmNmIyNTZhZWRmNDlhMWJLh0Ry: 00:20:12.272 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OWY5ODFhNmFhNGRjNWIxYTI2N2E1YWUyZDRkYmY2Y2YC0VJ4: ]] 00:20:12.272 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OWY5ODFhNmFhNGRjNWIxYTI2N2E1YWUyZDRkYmY2Y2YC0VJ4: 00:20:12.272 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:20:12.272 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:12.272 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:12.272 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:12.272 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:12.272 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:12.272 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:12.272 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.272 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:12.272 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.272 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:12.272 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:12.272 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:12.272 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:12.272 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:12.272 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:12.272 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:12.272 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:12.272 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:12.272 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:12.272 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:12.272 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:12.272 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.272 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:12.272 nvme0n1 00:20:12.272 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.272 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:12.272 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.272 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:12.272 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:12.272 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.529 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:12.529 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:12.530 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.530 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:12.530 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.530 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:12.530 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:20:12.530 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:12.530 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:12.530 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:12.530 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:12.530 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWMxMGEzNjAyNmIxYTI0ZmQ2YmU1Nzc5MmVkMjEwNGQ0ZjYyYzJiZTY4NWQxNzc2av0ZvA==: 00:20:12.530 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmE3YjgzYjYzZDhiZjIyZjc2MDRjNDQyN2Q5NjExNjDXDsMB: 00:20:12.530 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:12.530 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:12.530 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWMxMGEzNjAyNmIxYTI0ZmQ2YmU1Nzc5MmVkMjEwNGQ0ZjYyYzJiZTY4NWQxNzc2av0ZvA==: 00:20:12.530 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmE3YjgzYjYzZDhiZjIyZjc2MDRjNDQyN2Q5NjExNjDXDsMB: ]] 00:20:12.530 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmE3YjgzYjYzZDhiZjIyZjc2MDRjNDQyN2Q5NjExNjDXDsMB: 00:20:12.530 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:20:12.530 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:12.530 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:12.530 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:12.530 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:12.530 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:12.530 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:12.530 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.530 20:48:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:12.530 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.530 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:12.530 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:12.530 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:12.530 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:12.530 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:12.530 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:12.530 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:12.530 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:12.530 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:12.530 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:12.530 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:12.530 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:12.530 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.530 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:12.530 nvme0n1 00:20:12.530 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.530 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:12.530 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:12.530 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.530 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:12.530 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.788 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:12.788 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:12.788 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.788 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:12.788 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.788 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:12.788 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:20:12.788 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:12.788 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:12.788 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:12.788 
20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:12.788 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MmJlYmUxNGJlMzdlYmMyYmYyNzg0MGQwNjcwNTQwZWYwYmJjMTMzZjM5YzMxMWIwYWE1M2YyMDVmZmU3NDA1YbG6x3I=: 00:20:12.788 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:12.788 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:12.788 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:12.788 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MmJlYmUxNGJlMzdlYmMyYmYyNzg0MGQwNjcwNTQwZWYwYmJjMTMzZjM5YzMxMWIwYWE1M2YyMDVmZmU3NDA1YbG6x3I=: 00:20:12.788 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:12.788 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:20:12.788 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:12.788 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:12.788 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:12.788 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:12.788 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:12.788 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:12.788 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.788 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:12.789 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.789 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:12.789 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:12.789 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:12.789 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:12.789 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:12.789 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:12.789 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:12.789 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:12.789 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:12.789 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:12.789 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:12.789 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:12.789 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.789 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
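Editor's note: the records above are one pass of the host/auth.sh loop. For each digest/dhgroup/keyid combination the test sets the target-side secret via nvmet_auth_set_key (the echo 'hmac(sha512)' / echo ffdhe4096 / echo DHHC-1:... lines), restricts the initiator to that single digest and DH group, attaches the controller with DH-HMAC-CHAP, checks it enumerates as nvme0, and detaches it before the next iteration. A minimal hand-run sketch of the same initiator-side sequence, assuming SPDK's scripts/rpc.py and the key0..key4 / ckey0..ckey3 keyring entries registered earlier in this run (names, NQNs, and the 10.0.0.1:4420 listener are taken from the log above, not re-verified here):

  # Limit the initiator to the digest/dhgroup pair under test.
  scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
  # Attach with DH-HMAC-CHAP; keyid 4 has no controller (bidirectional) key in this run,
  # so only --dhchap-key is passed.
  scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
  # Confirm the controller came up, then tear it down before the next iteration.
  scripts/rpc.py bdev_nvme_get_controllers
  scripts/rpc.py bdev_nvme_detach_controller nvme0
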
00:20:13.047 nvme0n1 00:20:13.047 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.047 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:13.047 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.047 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:13.047 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:13.047 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.047 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:13.047 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:13.047 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.047 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:13.047 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.047 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:13.047 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:13.047 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:20:13.047 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:13.047 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:13.047 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:13.047 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:13.047 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGI1MDVjNDA0ZDI2MWJmMDBmZTgwOTdiYTMwZmE2NGJeHzz7: 00:20:13.047 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWU1MWQ0Y2YzNTlkOTZkMzViNDVjZWRkODcxZjVhOGFiNDgwYWNiN2Q1MGU0Y2Q1MGY3ZGE3YTkxNTdjNTkxNM9v5xw=: 00:20:13.047 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:13.047 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:13.047 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGI1MDVjNDA0ZDI2MWJmMDBmZTgwOTdiYTMwZmE2NGJeHzz7: 00:20:13.047 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWU1MWQ0Y2YzNTlkOTZkMzViNDVjZWRkODcxZjVhOGFiNDgwYWNiN2Q1MGU0Y2Q1MGY3ZGE3YTkxNTdjNTkxNM9v5xw=: ]] 00:20:13.047 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWU1MWQ0Y2YzNTlkOTZkMzViNDVjZWRkODcxZjVhOGFiNDgwYWNiN2Q1MGU0Y2Q1MGY3ZGE3YTkxNTdjNTkxNM9v5xw=: 00:20:13.047 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:20:13.047 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:13.047 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:13.047 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:13.047 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:13.047 20:48:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:13.047 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:13.047 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.047 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:13.047 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.047 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:13.047 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:13.047 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:13.047 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:13.047 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:13.047 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:13.047 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:13.047 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:13.047 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:13.047 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:13.047 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:13.047 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:13.047 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.047 20:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:13.305 nvme0n1 00:20:13.305 20:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.305 20:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:13.305 20:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.305 20:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:13.306 20:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:13.306 20:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.306 20:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:13.306 20:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:13.306 20:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.306 20:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:13.563 20:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.563 20:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:13.563 20:48:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:20:13.563 20:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:13.563 20:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:13.563 20:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:13.563 20:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:13.563 20:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTA2ZmE2ZTY0Njc0ZDAwYzIyNTQxYmI1MzA3Y2Q3ODFmZjY2NTlmYjA2ZDZlNGNml+laCg==: 00:20:13.563 20:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWE1YTkyYjVhZTUzNDU0OTQxY2Y5ODI4NmY1MDVkMDZiNjE0Mzc0MjRmMGE1MWUy29oHFQ==: 00:20:13.563 20:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:13.564 20:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:13.564 20:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTA2ZmE2ZTY0Njc0ZDAwYzIyNTQxYmI1MzA3Y2Q3ODFmZjY2NTlmYjA2ZDZlNGNml+laCg==: 00:20:13.564 20:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWE1YTkyYjVhZTUzNDU0OTQxY2Y5ODI4NmY1MDVkMDZiNjE0Mzc0MjRmMGE1MWUy29oHFQ==: ]] 00:20:13.564 20:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWE1YTkyYjVhZTUzNDU0OTQxY2Y5ODI4NmY1MDVkMDZiNjE0Mzc0MjRmMGE1MWUy29oHFQ==: 00:20:13.564 20:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:20:13.564 20:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:13.564 20:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:13.564 20:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:13.564 20:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:13.564 20:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:13.564 20:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:13.564 20:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.564 20:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:13.564 20:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.564 20:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:13.564 20:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:13.564 20:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:13.564 20:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:13.564 20:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:13.564 20:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:13.564 20:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:13.564 20:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:13.564 20:48:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:13.564 20:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:13.564 20:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:13.564 20:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:13.564 20:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.564 20:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:13.822 nvme0n1 00:20:13.822 20:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.822 20:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:13.822 20:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:13.822 20:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.822 20:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:13.822 20:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.822 20:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:13.822 20:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:13.822 20:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.822 20:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:13.822 20:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.822 20:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:13.822 20:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:20:13.822 20:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:13.822 20:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:13.822 20:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:13.822 20:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:13.822 20:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTUxZmVjNzQ0YWNmYTRhYzNmNmIyNTZhZWRmNDlhMWJLh0Ry: 00:20:13.822 20:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWY5ODFhNmFhNGRjNWIxYTI2N2E1YWUyZDRkYmY2Y2YC0VJ4: 00:20:13.822 20:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:13.822 20:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:13.822 20:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTUxZmVjNzQ0YWNmYTRhYzNmNmIyNTZhZWRmNDlhMWJLh0Ry: 00:20:13.822 20:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OWY5ODFhNmFhNGRjNWIxYTI2N2E1YWUyZDRkYmY2Y2YC0VJ4: ]] 00:20:13.822 20:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OWY5ODFhNmFhNGRjNWIxYTI2N2E1YWUyZDRkYmY2Y2YC0VJ4: 00:20:13.822 20:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:20:13.822 20:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:13.822 20:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:13.822 20:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:13.822 20:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:13.822 20:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:13.822 20:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:13.822 20:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.822 20:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:13.822 20:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.822 20:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:13.822 20:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:13.822 20:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:13.822 20:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:13.822 20:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:13.822 20:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:13.822 20:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:13.822 20:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:13.822 20:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:13.822 20:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:13.822 20:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:13.822 20:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:13.822 20:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.822 20:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:14.399 nvme0n1 00:20:14.399 20:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.399 20:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:14.399 20:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:14.399 20:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.399 20:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:14.399 20:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.399 20:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:14.399 20:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:20:14.399 20:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.400 20:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:14.400 20:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.400 20:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:14.400 20:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:20:14.400 20:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:14.400 20:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:14.400 20:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:14.400 20:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:14.400 20:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWMxMGEzNjAyNmIxYTI0ZmQ2YmU1Nzc5MmVkMjEwNGQ0ZjYyYzJiZTY4NWQxNzc2av0ZvA==: 00:20:14.400 20:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmE3YjgzYjYzZDhiZjIyZjc2MDRjNDQyN2Q5NjExNjDXDsMB: 00:20:14.400 20:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:14.400 20:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:14.400 20:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWMxMGEzNjAyNmIxYTI0ZmQ2YmU1Nzc5MmVkMjEwNGQ0ZjYyYzJiZTY4NWQxNzc2av0ZvA==: 00:20:14.400 20:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmE3YjgzYjYzZDhiZjIyZjc2MDRjNDQyN2Q5NjExNjDXDsMB: ]] 00:20:14.400 20:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmE3YjgzYjYzZDhiZjIyZjc2MDRjNDQyN2Q5NjExNjDXDsMB: 00:20:14.400 20:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:20:14.400 20:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:14.400 20:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:14.400 20:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:14.400 20:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:14.400 20:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:14.400 20:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:14.400 20:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.400 20:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:14.400 20:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.400 20:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:14.400 20:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:14.400 20:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:14.400 20:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:14.400 20:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:14.400 20:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:14.400 20:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:14.400 20:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:14.400 20:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:14.400 20:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:14.400 20:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:14.400 20:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:14.400 20:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.400 20:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:14.664 nvme0n1 00:20:14.664 20:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.664 20:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:14.664 20:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:14.664 20:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.664 20:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:14.664 20:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.664 20:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:14.664 20:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:14.664 20:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.664 20:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:14.664 20:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.664 20:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:14.664 20:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:20:14.664 20:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:14.664 20:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:14.664 20:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:14.664 20:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:14.664 20:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MmJlYmUxNGJlMzdlYmMyYmYyNzg0MGQwNjcwNTQwZWYwYmJjMTMzZjM5YzMxMWIwYWE1M2YyMDVmZmU3NDA1YbG6x3I=: 00:20:14.664 20:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:14.664 20:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:14.664 20:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:14.664 20:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MmJlYmUxNGJlMzdlYmMyYmYyNzg0MGQwNjcwNTQwZWYwYmJjMTMzZjM5YzMxMWIwYWE1M2YyMDVmZmU3NDA1YbG6x3I=: 00:20:14.664 20:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:14.664 20:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:20:14.664 20:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:14.664 20:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:14.664 20:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:14.664 20:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:14.664 20:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:14.664 20:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:14.665 20:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.665 20:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:14.665 20:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.665 20:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:14.665 20:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:14.665 20:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:14.665 20:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:14.665 20:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:14.665 20:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:14.665 20:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:14.665 20:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:14.665 20:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:14.665 20:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:14.665 20:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:14.665 20:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:14.665 20:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.665 20:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:15.231 nvme0n1 00:20:15.231 20:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.231 20:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:15.231 20:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:15.231 20:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.231 20:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:15.231 20:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.231 20:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:15.232 20:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:15.232 20:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.232 20:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:15.232 20:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.232 20:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:15.232 20:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:15.232 20:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:20:15.232 20:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:15.232 20:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:15.232 20:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:15.232 20:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:15.232 20:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGI1MDVjNDA0ZDI2MWJmMDBmZTgwOTdiYTMwZmE2NGJeHzz7: 00:20:15.232 20:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWU1MWQ0Y2YzNTlkOTZkMzViNDVjZWRkODcxZjVhOGFiNDgwYWNiN2Q1MGU0Y2Q1MGY3ZGE3YTkxNTdjNTkxNM9v5xw=: 00:20:15.232 20:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:15.232 20:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:15.232 20:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGI1MDVjNDA0ZDI2MWJmMDBmZTgwOTdiYTMwZmE2NGJeHzz7: 00:20:15.232 20:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWU1MWQ0Y2YzNTlkOTZkMzViNDVjZWRkODcxZjVhOGFiNDgwYWNiN2Q1MGU0Y2Q1MGY3ZGE3YTkxNTdjNTkxNM9v5xw=: ]] 00:20:15.232 20:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWU1MWQ0Y2YzNTlkOTZkMzViNDVjZWRkODcxZjVhOGFiNDgwYWNiN2Q1MGU0Y2Q1MGY3ZGE3YTkxNTdjNTkxNM9v5xw=: 00:20:15.232 20:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:20:15.232 20:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:15.232 20:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:15.232 20:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:15.232 20:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:15.232 20:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:15.232 20:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:15.232 20:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.232 20:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:15.232 20:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.232 20:48:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:15.232 20:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:15.232 20:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:15.232 20:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:15.232 20:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:15.232 20:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:15.232 20:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:15.232 20:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:15.232 20:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:15.232 20:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:15.232 20:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:15.232 20:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:15.232 20:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.232 20:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:15.818 nvme0n1 00:20:15.818 20:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.818 20:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:15.818 20:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.818 20:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:15.818 20:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:15.818 20:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.818 20:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:15.818 20:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:15.818 20:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.818 20:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:15.818 20:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.818 20:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:15.818 20:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:20:15.818 20:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:15.818 20:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:15.818 20:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:15.818 20:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:15.818 20:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NTA2ZmE2ZTY0Njc0ZDAwYzIyNTQxYmI1MzA3Y2Q3ODFmZjY2NTlmYjA2ZDZlNGNml+laCg==: 00:20:15.818 20:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWE1YTkyYjVhZTUzNDU0OTQxY2Y5ODI4NmY1MDVkMDZiNjE0Mzc0MjRmMGE1MWUy29oHFQ==: 00:20:15.818 20:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:15.818 20:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:15.818 20:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTA2ZmE2ZTY0Njc0ZDAwYzIyNTQxYmI1MzA3Y2Q3ODFmZjY2NTlmYjA2ZDZlNGNml+laCg==: 00:20:15.818 20:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWE1YTkyYjVhZTUzNDU0OTQxY2Y5ODI4NmY1MDVkMDZiNjE0Mzc0MjRmMGE1MWUy29oHFQ==: ]] 00:20:15.818 20:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWE1YTkyYjVhZTUzNDU0OTQxY2Y5ODI4NmY1MDVkMDZiNjE0Mzc0MjRmMGE1MWUy29oHFQ==: 00:20:15.818 20:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:20:15.818 20:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:15.818 20:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:15.818 20:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:15.818 20:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:15.818 20:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:15.818 20:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:15.818 20:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.818 20:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:15.818 20:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.818 20:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:15.818 20:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:15.818 20:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:15.818 20:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:15.818 20:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:15.818 20:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:15.818 20:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:15.818 20:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:15.818 20:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:15.818 20:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:15.819 20:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:15.819 20:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:15.819 20:48:10 
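The DHHC-1 secrets echoed into the target configuration above are colon-delimited: a "DHHC-1" tag, a two-digit field, a base64 payload, and a closing colon. A throwaway snippet to pull one apart, using a secret copied verbatim from the trace; the meaning of the two-digit field (it records how the underlying secret was transformed) comes from the NVMe DH-HMAC-CHAP secret representation in general, not from anything this log spells out:

    # Illustrative only: split a DH-HMAC-CHAP secret string into its fields.
    key='DHHC-1:00:NTA2ZmE2ZTY0Njc0ZDAwYzIyNTQxYmI1MzA3Y2Q3ODFmZjY2NTlmYjA2ZDZlNGNml+laCg==:'
    IFS=':' read -r tag xform payload _ <<< "$key"
    echo "tag=$tag xform=$xform"
    # Decoded byte length of the base64 payload.
    printf '%s' "$payload" | base64 -d | wc -c
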
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.819 20:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:16.754 nvme0n1 00:20:16.754 20:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.754 20:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:16.754 20:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:16.754 20:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.754 20:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:16.754 20:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.754 20:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:16.754 20:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:16.754 20:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.754 20:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:16.754 20:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.754 20:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:16.754 20:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:20:16.754 20:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:16.754 20:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:16.754 20:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:16.754 20:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:16.754 20:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTUxZmVjNzQ0YWNmYTRhYzNmNmIyNTZhZWRmNDlhMWJLh0Ry: 00:20:16.754 20:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWY5ODFhNmFhNGRjNWIxYTI2N2E1YWUyZDRkYmY2Y2YC0VJ4: 00:20:16.754 20:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:16.754 20:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:16.754 20:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTUxZmVjNzQ0YWNmYTRhYzNmNmIyNTZhZWRmNDlhMWJLh0Ry: 00:20:16.754 20:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OWY5ODFhNmFhNGRjNWIxYTI2N2E1YWUyZDRkYmY2Y2YC0VJ4: ]] 00:20:16.754 20:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OWY5ODFhNmFhNGRjNWIxYTI2N2E1YWUyZDRkYmY2Y2YC0VJ4: 00:20:16.754 20:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:20:16.754 20:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:16.754 20:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:16.754 20:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:16.754 20:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:16.754 20:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:16.754 20:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:16.754 20:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.754 20:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:16.754 20:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.754 20:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:16.754 20:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:16.754 20:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:16.754 20:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:16.754 20:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:16.754 20:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:16.754 20:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:16.754 20:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:16.754 20:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:16.754 20:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:16.754 20:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:16.754 20:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:16.754 20:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.754 20:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:17.321 nvme0n1 00:20:17.321 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.321 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:17.321 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:17.321 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.321 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:17.321 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.321 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:17.321 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:17.321 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.321 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:17.321 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.321 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:17.321 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe8192 3 00:20:17.321 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:17.321 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:17.321 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:17.321 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:17.322 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWMxMGEzNjAyNmIxYTI0ZmQ2YmU1Nzc5MmVkMjEwNGQ0ZjYyYzJiZTY4NWQxNzc2av0ZvA==: 00:20:17.322 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmE3YjgzYjYzZDhiZjIyZjc2MDRjNDQyN2Q5NjExNjDXDsMB: 00:20:17.322 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:17.322 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:17.322 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWMxMGEzNjAyNmIxYTI0ZmQ2YmU1Nzc5MmVkMjEwNGQ0ZjYyYzJiZTY4NWQxNzc2av0ZvA==: 00:20:17.322 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmE3YjgzYjYzZDhiZjIyZjc2MDRjNDQyN2Q5NjExNjDXDsMB: ]] 00:20:17.322 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmE3YjgzYjYzZDhiZjIyZjc2MDRjNDQyN2Q5NjExNjDXDsMB: 00:20:17.322 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:20:17.322 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:17.322 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:17.322 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:17.322 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:17.322 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:17.322 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:17.322 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.322 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:17.322 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.322 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:17.322 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:17.322 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:17.322 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:17.322 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:17.322 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:17.322 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:17.322 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:17.322 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:17.322 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:17.322 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:17.322 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:17.322 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.322 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:17.889 nvme0n1 00:20:17.889 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.889 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:17.889 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:17.889 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.889 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:17.889 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.889 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:17.889 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:17.889 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.889 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:17.889 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.889 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:17.889 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:20:17.889 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:17.889 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:17.889 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:17.889 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:17.889 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MmJlYmUxNGJlMzdlYmMyYmYyNzg0MGQwNjcwNTQwZWYwYmJjMTMzZjM5YzMxMWIwYWE1M2YyMDVmZmU3NDA1YbG6x3I=: 00:20:17.889 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:17.889 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:17.889 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:17.889 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MmJlYmUxNGJlMzdlYmMyYmYyNzg0MGQwNjcwNTQwZWYwYmJjMTMzZjM5YzMxMWIwYWE1M2YyMDVmZmU3NDA1YbG6x3I=: 00:20:17.889 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:17.889 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:20:17.889 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:17.889 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:17.889 20:48:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:17.889 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:17.889 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:17.889 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:17.889 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.889 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:17.889 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.889 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:17.889 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:17.889 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:17.889 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:17.889 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:17.889 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:17.889 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:17.889 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:17.889 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:17.889 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:17.889 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:17.889 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:17.889 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.889 20:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:18.457 nvme0n1 00:20:18.457 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.457 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:18.457 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.457 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:18.457 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:18.457 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.457 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:18.457 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:18.457 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.457 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:18.457 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
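Stripped of the xtrace noise, each successful connect_authenticate iteration traced above is just two RPCs plus a check and a teardown. A minimal sketch of one pass, reusing the exact RPC names, flags, and addresses visible in the log; scripts/rpc.py is assumed to stand in for the harness's rpc_cmd wrapper, and the named keys (key0, ckey0) are assumed to have been registered earlier in the script, which this excerpt does not show:

    # Limit the host to the digest/DH group pair under test (sha512 / ffdhe8192 in this pass).
    rpc.py bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
    # Attach with DH-HMAC-CHAP: key0 authenticates the host, ckey0 authenticates the controller.
    rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0
    # Confirm the controller came up, then drop it before the next key id is tried.
    rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
    rpc.py bdev_nvme_detach_controller nvme0
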
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.457 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:20:18.457 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:18.457 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:18.457 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:18.457 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:18.457 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTA2ZmE2ZTY0Njc0ZDAwYzIyNTQxYmI1MzA3Y2Q3ODFmZjY2NTlmYjA2ZDZlNGNml+laCg==: 00:20:18.457 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWE1YTkyYjVhZTUzNDU0OTQxY2Y5ODI4NmY1MDVkMDZiNjE0Mzc0MjRmMGE1MWUy29oHFQ==: 00:20:18.457 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:18.457 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:18.457 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTA2ZmE2ZTY0Njc0ZDAwYzIyNTQxYmI1MzA3Y2Q3ODFmZjY2NTlmYjA2ZDZlNGNml+laCg==: 00:20:18.457 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWE1YTkyYjVhZTUzNDU0OTQxY2Y5ODI4NmY1MDVkMDZiNjE0Mzc0MjRmMGE1MWUy29oHFQ==: ]] 00:20:18.457 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWE1YTkyYjVhZTUzNDU0OTQxY2Y5ODI4NmY1MDVkMDZiNjE0Mzc0MjRmMGE1MWUy29oHFQ==: 00:20:18.457 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:18.457 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.457 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:18.457 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.458 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:20:18.458 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:18.458 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:18.458 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:18.458 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:18.458 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:18.458 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:18.458 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:18.458 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:18.458 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:18.458 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:18.458 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:20:18.458 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # 
local es=0 00:20:18.458 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:20:18.458 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:20:18.458 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:18.458 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:20:18.458 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:18.458 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:20:18.458 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.458 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:18.717 request: 00:20:18.717 { 00:20:18.717 "name": "nvme0", 00:20:18.717 "trtype": "tcp", 00:20:18.717 "traddr": "10.0.0.1", 00:20:18.717 "adrfam": "ipv4", 00:20:18.717 "trsvcid": "4420", 00:20:18.717 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:20:18.717 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:20:18.717 "prchk_reftag": false, 00:20:18.717 "prchk_guard": false, 00:20:18.717 "hdgst": false, 00:20:18.717 "ddgst": false, 00:20:18.717 "allow_unrecognized_csi": false, 00:20:18.717 "method": "bdev_nvme_attach_controller", 00:20:18.717 "req_id": 1 00:20:18.717 } 00:20:18.717 Got JSON-RPC error response 00:20:18.717 response: 00:20:18.717 { 00:20:18.717 "code": -5, 00:20:18.717 "message": "Input/output error" 00:20:18.717 } 00:20:18.717 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:20:18.717 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:20:18.717 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:18.717 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:18.717 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:18.717 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:20:18.717 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.717 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:18.717 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:20:18.717 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.717 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:20:18.717 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:20:18.717 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:18.717 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:18.717 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:18.717 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:18.717 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:18.717 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:18.717 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:18.717 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:18.717 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:18.717 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:18.717 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:20:18.717 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:20:18.717 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:20:18.717 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:20:18.717 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:18.717 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:20:18.717 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:18.717 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:20:18.717 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.717 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:18.717 request: 00:20:18.717 { 00:20:18.717 "name": "nvme0", 00:20:18.717 "trtype": "tcp", 00:20:18.717 "traddr": "10.0.0.1", 00:20:18.717 "adrfam": "ipv4", 00:20:18.717 "trsvcid": "4420", 00:20:18.717 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:20:18.717 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:20:18.717 "prchk_reftag": false, 00:20:18.717 "prchk_guard": false, 00:20:18.717 "hdgst": false, 00:20:18.717 "ddgst": false, 00:20:18.717 "dhchap_key": "key2", 00:20:18.717 "allow_unrecognized_csi": false, 00:20:18.717 "method": "bdev_nvme_attach_controller", 00:20:18.717 "req_id": 1 00:20:18.717 } 00:20:18.717 Got JSON-RPC error response 00:20:18.717 response: 00:20:18.717 { 00:20:18.717 "code": -5, 00:20:18.717 "message": "Input/output error" 00:20:18.717 } 00:20:18.717 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:20:18.717 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:20:18.717 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:18.717 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:18.717 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:18.717 20:48:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:20:18.717 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:20:18.717 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.717 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:18.717 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.717 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:20:18.717 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:20:18.717 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:18.717 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:18.717 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:18.717 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:18.717 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:18.717 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:18.717 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:18.717 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:18.717 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:18.717 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:18.717 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:18.717 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:20:18.717 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:18.718 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:20:18.718 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:18.718 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:20:18.718 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:18.718 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:18.718 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.718 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:18.718 request: 00:20:18.718 { 00:20:18.718 "name": "nvme0", 00:20:18.718 "trtype": "tcp", 00:20:18.718 "traddr": "10.0.0.1", 00:20:18.718 "adrfam": "ipv4", 00:20:18.718 "trsvcid": "4420", 
00:20:18.718 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:20:18.718 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:20:18.718 "prchk_reftag": false, 00:20:18.718 "prchk_guard": false, 00:20:18.718 "hdgst": false, 00:20:18.718 "ddgst": false, 00:20:18.718 "dhchap_key": "key1", 00:20:18.718 "dhchap_ctrlr_key": "ckey2", 00:20:18.718 "allow_unrecognized_csi": false, 00:20:18.718 "method": "bdev_nvme_attach_controller", 00:20:18.718 "req_id": 1 00:20:18.718 } 00:20:18.718 Got JSON-RPC error response 00:20:18.718 response: 00:20:18.718 { 00:20:18.718 "code": -5, 00:20:18.718 "message": "Input/output error" 00:20:18.718 } 00:20:18.718 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:20:18.718 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:20:18.718 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:18.718 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:18.718 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:18.718 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:20:18.718 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:18.718 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:18.718 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:18.718 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:18.718 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:18.718 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:18.718 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:18.718 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:18.718 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:18.718 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:18.718 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:20:18.718 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.718 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:18.978 nvme0n1 00:20:18.978 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.978 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:20:18.978 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:18.978 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:18.978 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:18.978 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:18.978 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:01:MTUxZmVjNzQ0YWNmYTRhYzNmNmIyNTZhZWRmNDlhMWJLh0Ry: 00:20:18.978 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWY5ODFhNmFhNGRjNWIxYTI2N2E1YWUyZDRkYmY2Y2YC0VJ4: 00:20:18.978 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:18.978 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:18.978 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTUxZmVjNzQ0YWNmYTRhYzNmNmIyNTZhZWRmNDlhMWJLh0Ry: 00:20:18.978 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OWY5ODFhNmFhNGRjNWIxYTI2N2E1YWUyZDRkYmY2Y2YC0VJ4: ]] 00:20:18.978 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OWY5ODFhNmFhNGRjNWIxYTI2N2E1YWUyZDRkYmY2Y2YC0VJ4: 00:20:18.978 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:18.978 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.978 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:18.978 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.978 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:20:18.978 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:20:18.978 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.978 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:18.978 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.978 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:18.978 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:18.978 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:20:18.978 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:18.978 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:20:18.978 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:18.978 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:20:18.978 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:18.978 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:18.978 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.978 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:18.978 request: 00:20:18.978 { 00:20:18.978 "name": "nvme0", 00:20:18.978 "dhchap_key": "key1", 00:20:18.978 "dhchap_ctrlr_key": "ckey2", 00:20:18.978 "method": "bdev_nvme_set_keys", 00:20:18.978 "req_id": 1 00:20:18.978 } 00:20:18.978 Got JSON-RPC error response 00:20:18.978 response: 00:20:18.978 
{ 00:20:18.978 "code": -13, 00:20:18.978 "message": "Permission denied" 00:20:18.978 } 00:20:18.978 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:20:18.978 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:20:18.978 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:18.978 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:18.978 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:18.978 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:20:18.978 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.978 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:18.978 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:20:18.978 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.978 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:20:18.978 20:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:20:20.354 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:20:20.354 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:20:20.354 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.354 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:20.354 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.354 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:20:20.354 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:20:20.354 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:20.354 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:20.354 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:20.354 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:20.354 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTA2ZmE2ZTY0Njc0ZDAwYzIyNTQxYmI1MzA3Y2Q3ODFmZjY2NTlmYjA2ZDZlNGNml+laCg==: 00:20:20.354 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWE1YTkyYjVhZTUzNDU0OTQxY2Y5ODI4NmY1MDVkMDZiNjE0Mzc0MjRmMGE1MWUy29oHFQ==: 00:20:20.354 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:20.354 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:20.354 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTA2ZmE2ZTY0Njc0ZDAwYzIyNTQxYmI1MzA3Y2Q3ODFmZjY2NTlmYjA2ZDZlNGNml+laCg==: 00:20:20.354 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWE1YTkyYjVhZTUzNDU0OTQxY2Y5ODI4NmY1MDVkMDZiNjE0Mzc0MjRmMGE1MWUy29oHFQ==: ]] 00:20:20.354 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWE1YTkyYjVhZTUzNDU0OTQxY2Y5ODI4NmY1MDVkMDZiNjE0Mzc0MjRmMGE1MWUy29oHFQ==: 00:20:20.354 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host 
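The bdev_nvme_set_keys exchange just above exercises live re-keying of an attached controller: rotating to the key pair the target now expects succeeds, while offering a host key the target no longer accepts is refused with JSON-RPC code -13 (Permission denied). Because the controller was attached with --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1, the harness can then simply poll bdev_nvme_get_controllers until the controller count settles at the value it is waiting for. A condensed sketch of that contrast, again with scripts/rpc.py standing in for rpc_cmd:

    # Rotate the live nvme0 controller to the key pair the target has just been switched to.
    rpc.py bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
    # A key the target no longer accepts is rejected outright (code -13, Permission denied).
    rpc.py bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 \
        || echo "rejected, as the test expects"
    # Poll until the controller count reaches the value the test is waiting for.
    rpc.py bdev_nvme_get_controllers | jq length
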
-- host/auth.sh@142 -- # get_main_ns_ip 00:20:20.354 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:20.354 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:20.354 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:20.354 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:20.354 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:20.354 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:20.354 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:20.354 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:20.354 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:20.354 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:20.354 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:20:20.354 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.354 20:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:20.354 nvme0n1 00:20:20.354 20:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.354 20:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:20:20.354 20:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:20.354 20:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:20.354 20:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:20.354 20:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:20.354 20:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTUxZmVjNzQ0YWNmYTRhYzNmNmIyNTZhZWRmNDlhMWJLh0Ry: 00:20:20.354 20:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWY5ODFhNmFhNGRjNWIxYTI2N2E1YWUyZDRkYmY2Y2YC0VJ4: 00:20:20.354 20:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:20.354 20:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:20.354 20:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTUxZmVjNzQ0YWNmYTRhYzNmNmIyNTZhZWRmNDlhMWJLh0Ry: 00:20:20.354 20:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OWY5ODFhNmFhNGRjNWIxYTI2N2E1YWUyZDRkYmY2Y2YC0VJ4: ]] 00:20:20.354 20:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OWY5ODFhNmFhNGRjNWIxYTI2N2E1YWUyZDRkYmY2Y2YC0VJ4: 00:20:20.354 20:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:20:20.354 20:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:20:20.354 20:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # 
valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:20:20.354 20:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:20:20.354 20:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:20.354 20:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:20:20.354 20:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:20.354 20:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:20:20.354 20:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.354 20:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:20.354 request: 00:20:20.354 { 00:20:20.354 "name": "nvme0", 00:20:20.354 "dhchap_key": "key2", 00:20:20.354 "dhchap_ctrlr_key": "ckey1", 00:20:20.354 "method": "bdev_nvme_set_keys", 00:20:20.354 "req_id": 1 00:20:20.354 } 00:20:20.354 Got JSON-RPC error response 00:20:20.354 response: 00:20:20.354 { 00:20:20.354 "code": -13, 00:20:20.354 "message": "Permission denied" 00:20:20.354 } 00:20:20.354 20:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:20:20.354 20:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:20:20.354 20:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:20.354 20:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:20.354 20:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:20.354 20:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:20:20.354 20:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:20:20.355 20:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.355 20:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:20.355 20:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.355 20:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:20:20.355 20:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:20:21.290 20:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:20:21.290 20:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:20:21.290 20:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.290 20:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:21.290 20:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.290 20:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:20:21.290 20:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:20:21.290 20:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:20:21.290 20:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:20:21.290 20:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # 
nvmfcleanup 00:20:21.290 20:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:20:21.290 20:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:21.290 20:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:20:21.290 20:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:21.290 20:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:21.290 rmmod nvme_tcp 00:20:21.290 rmmod nvme_fabrics 00:20:21.290 20:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:21.549 20:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:20:21.549 20:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:20:21.549 20:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 78903 ']' 00:20:21.549 20:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 78903 00:20:21.549 20:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 78903 ']' 00:20:21.549 20:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 78903 00:20:21.549 20:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:20:21.549 20:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:21.549 20:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78903 00:20:21.549 killing process with pid 78903 00:20:21.549 20:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:21.549 20:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:21.549 20:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78903' 00:20:21.549 20:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 78903 00:20:21.549 20:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 78903 00:20:21.549 20:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:21.549 20:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:21.549 20:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:21.549 20:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:20:21.549 20:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:21.549 20:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:20:21.549 20:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:20:21.549 20:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:21.549 20:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:20:21.549 20:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:20:21.807 20:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:20:21.807 20:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:20:21.807 20:48:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:20:21.807 20:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:20:21.807 20:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:20:21.807 20:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:20:21.807 20:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:20:21.807 20:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:20:21.807 20:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:20:21.807 20:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:20:21.807 20:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:21.807 20:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:21.807 20:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@246 -- # remove_spdk_ns 00:20:21.807 20:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:21.807 20:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:21.807 20:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:21.807 20:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@300 -- # return 0 00:20:21.807 20:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:20:21.807 20:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:20:21.807 20:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:20:21.807 20:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:20:21.807 20:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:20:21.807 20:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:20:21.807 20:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:20:21.807 20:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:20:21.807 20:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:20:22.066 20:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:20:22.066 20:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:20:22.066 20:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:20:22.633 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:22.892 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 
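The clean_kernel_target steps traced above unwind the configfs plumbing for the kernel nvmet target that served as the authentication peer. Pulled out of the trace into a readable sequence; every path and command below appears in the log except the destination of the bare echo 0, which is assumed here to be the namespace's enable attribute:

    SUBSYS=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
    # Revoke the host's access to the subsystem and remove the host entry itself.
    rm "$SUBSYS/allowed_hosts/nqn.2024-02.io.spdk:host0"
    rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    # Disable the namespace before removing it (redirect target assumed; the trace only shows "echo 0").
    echo 0 > "$SUBSYS/namespaces/1/enable"
    # Unlink the subsystem from port 1, then tear down namespace, port, and subsystem.
    rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0
    rmdir "$SUBSYS/namespaces/1"
    rmdir /sys/kernel/config/nvmet/ports/1
    rmdir "$SUBSYS"
    # Finally unload the kernel target modules.
    modprobe -r nvmet_tcp nvmet
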
00:20:22.892 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:20:22.892 20:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.bzo /tmp/spdk.key-null.KqZ /tmp/spdk.key-sha256.jVl /tmp/spdk.key-sha384.KwF /tmp/spdk.key-sha512.mPt /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log 00:20:22.892 20:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:20:23.495 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:23.495 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:20:23.495 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:20:23.495 00:20:23.495 real 0m38.599s 00:20:23.495 user 0m34.808s 00:20:23.495 sys 0m5.072s 00:20:23.495 ************************************ 00:20:23.495 END TEST nvmf_auth_host 00:20:23.495 ************************************ 00:20:23.495 20:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:23.495 20:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:23.495 20:48:18 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:20:23.495 20:48:18 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:20:23.495 20:48:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:23.495 20:48:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:23.495 20:48:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:23.495 ************************************ 00:20:23.495 START TEST nvmf_digest 00:20:23.495 ************************************ 00:20:23.495 20:48:18 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:20:23.495 * Looking for test storage... 
00:20:23.495 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:23.495 20:48:18 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:23.495 20:48:18 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lcov --version 00:20:23.495 20:48:18 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:23.753 20:48:18 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:23.753 20:48:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:23.753 20:48:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:23.753 20:48:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:23.753 20:48:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:20:23.753 20:48:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:20:23.753 20:48:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:20:23.753 20:48:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:20:23.753 20:48:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:20:23.753 20:48:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:20:23.753 20:48:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:20:23.753 20:48:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:23.753 20:48:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:20:23.753 20:48:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:20:23.753 20:48:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:23.753 20:48:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:23.753 20:48:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:20:23.753 20:48:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:20:23.753 20:48:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:23.753 20:48:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:20:23.753 20:48:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:20:23.753 20:48:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:20:23.753 20:48:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:20:23.753 20:48:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:23.753 20:48:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:20:23.753 20:48:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:20:23.753 20:48:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:23.753 20:48:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:23.753 20:48:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:20:23.753 20:48:18 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:23.753 20:48:18 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:23.753 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:23.754 --rc genhtml_branch_coverage=1 00:20:23.754 --rc genhtml_function_coverage=1 00:20:23.754 --rc genhtml_legend=1 00:20:23.754 --rc geninfo_all_blocks=1 00:20:23.754 --rc geninfo_unexecuted_blocks=1 00:20:23.754 00:20:23.754 ' 00:20:23.754 20:48:18 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:23.754 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:23.754 --rc genhtml_branch_coverage=1 00:20:23.754 --rc genhtml_function_coverage=1 00:20:23.754 --rc genhtml_legend=1 00:20:23.754 --rc geninfo_all_blocks=1 00:20:23.754 --rc geninfo_unexecuted_blocks=1 00:20:23.754 00:20:23.754 ' 00:20:23.754 20:48:18 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:23.754 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:23.754 --rc genhtml_branch_coverage=1 00:20:23.754 --rc genhtml_function_coverage=1 00:20:23.754 --rc genhtml_legend=1 00:20:23.754 --rc geninfo_all_blocks=1 00:20:23.754 --rc geninfo_unexecuted_blocks=1 00:20:23.754 00:20:23.754 ' 00:20:23.754 20:48:18 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:23.754 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:23.754 --rc genhtml_branch_coverage=1 00:20:23.754 --rc genhtml_function_coverage=1 00:20:23.754 --rc genhtml_legend=1 00:20:23.754 --rc geninfo_all_blocks=1 00:20:23.754 --rc geninfo_unexecuted_blocks=1 00:20:23.754 00:20:23.754 ' 00:20:23.754 20:48:18 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:23.754 20:48:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:20:23.754 20:48:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:23.754 20:48:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:23.754 20:48:18 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:23.754 20:48:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:23.754 20:48:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:23.754 20:48:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:23.754 20:48:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:23.754 20:48:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:23.754 20:48:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:23.754 20:48:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:23.754 20:48:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b 00:20:23.754 20:48:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=5b7a0101-ee75-44bd-b64f-b6a56d193f2b 00:20:23.754 20:48:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:23.754 20:48:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:23.754 20:48:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:23.754 20:48:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:23.754 20:48:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:23.754 20:48:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:20:23.754 20:48:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:23.754 20:48:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:23.754 20:48:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:23.754 20:48:18 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:23.754 20:48:18 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:23.754 20:48:18 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:23.754 20:48:18 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:20:23.754 20:48:18 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:23.754 20:48:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:20:23.754 20:48:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:23.754 20:48:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:23.754 20:48:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:23.754 20:48:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:23.754 20:48:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:23.754 20:48:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:23.754 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:23.754 20:48:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:23.754 20:48:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:23.754 20:48:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:23.754 20:48:18 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:20:23.754 20:48:18 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:20:23.754 20:48:18 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:20:23.754 20:48:18 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:20:23.754 20:48:18 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:20:23.754 20:48:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:23.754 20:48:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:23.754 20:48:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:23.754 20:48:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:23.754 20:48:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:23.754 20:48:18 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:23.754 20:48:18 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:23.754 20:48:18 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:23.754 20:48:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:20:23.754 20:48:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:20:23.754 20:48:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:20:23.754 20:48:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:20:23.754 20:48:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:20:23.754 20:48:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@460 -- # nvmf_veth_init 00:20:23.754 20:48:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:23.754 20:48:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:20:23.754 20:48:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:20:23.754 20:48:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:20:23.754 20:48:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:23.754 20:48:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:20:23.754 20:48:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:23.754 20:48:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:20:23.754 20:48:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:23.754 20:48:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:20:23.754 20:48:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:23.754 20:48:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:23.754 20:48:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:23.754 20:48:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:23.754 20:48:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:23.754 20:48:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:23.754 20:48:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:20:23.754 Cannot find device "nvmf_init_br" 00:20:23.754 20:48:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # true 00:20:23.754 20:48:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:20:23.754 Cannot find device "nvmf_init_br2" 00:20:23.754 20:48:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # true 00:20:23.754 20:48:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:20:23.754 Cannot find device "nvmf_tgt_br" 00:20:23.755 20:48:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@164 -- # true 00:20:23.755 20:48:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@165 -- # ip link 
set nvmf_tgt_br2 nomaster 00:20:23.755 Cannot find device "nvmf_tgt_br2" 00:20:23.755 20:48:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@165 -- # true 00:20:23.755 20:48:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:20:23.755 Cannot find device "nvmf_init_br" 00:20:23.755 20:48:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # true 00:20:23.755 20:48:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:20:23.755 Cannot find device "nvmf_init_br2" 00:20:23.755 20:48:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@167 -- # true 00:20:23.755 20:48:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:20:23.755 Cannot find device "nvmf_tgt_br" 00:20:23.755 20:48:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@168 -- # true 00:20:23.755 20:48:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:20:23.755 Cannot find device "nvmf_tgt_br2" 00:20:23.755 20:48:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # true 00:20:23.755 20:48:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:20:23.755 Cannot find device "nvmf_br" 00:20:23.755 20:48:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # true 00:20:23.755 20:48:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:20:23.755 Cannot find device "nvmf_init_if" 00:20:23.755 20:48:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # true 00:20:23.755 20:48:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:20:24.013 Cannot find device "nvmf_init_if2" 00:20:24.013 20:48:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@172 -- # true 00:20:24.013 20:48:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:24.013 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:24.013 20:48:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@173 -- # true 00:20:24.013 20:48:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:24.013 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:24.013 20:48:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # true 00:20:24.013 20:48:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:20:24.013 20:48:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:24.013 20:48:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:20:24.013 20:48:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:24.013 20:48:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:24.013 20:48:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:24.013 20:48:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:24.013 20:48:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:24.013 20:48:18 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:20:24.013 20:48:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:20:24.013 20:48:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:20:24.013 20:48:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:20:24.013 20:48:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:20:24.013 20:48:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:20:24.013 20:48:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:20:24.013 20:48:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:20:24.013 20:48:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:20:24.013 20:48:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:24.013 20:48:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:24.013 20:48:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:24.013 20:48:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:20:24.013 20:48:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:20:24.013 20:48:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:20:24.013 20:48:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:20:24.013 20:48:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:24.013 20:48:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:24.013 20:48:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:24.013 20:48:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:20:24.013 20:48:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:20:24.013 20:48:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:20:24.013 20:48:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:24.013 20:48:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:20:24.013 20:48:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:20:24.013 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
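The nvmf_veth_init sequence above is what gives the digest suite its private test network: the initiator-side veths stay in the root namespace, the target-side veths move into nvmf_tgt_ns_spdk, and a bridge ties the peer ends together so 10.0.0.1 can reach the target's 10.0.0.3 listener. A condensed sketch for a single initiator/target pair, with names and addresses taken from the trace (run as root; the real script creates the nvmf_init_if2/nvmf_tgt_if2 pair the same way):

  ip netns add nvmf_tgt_ns_spdk

  # veth pairs: the *_if ends carry addresses, the *_br ends join the bridge
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up

  # one bridge joins the root-namespace ends of both pairs
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br

  # admit NVMe/TCP traffic and bridge-internal forwarding, then verify reachability
  # (the suite tags its rules with an SPDK_NVMF comment so iptr can strip them on cleanup)
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.3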
00:20:24.013 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.081 ms 00:20:24.013 00:20:24.013 --- 10.0.0.3 ping statistics --- 00:20:24.013 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:24.013 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:20:24.013 20:48:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:20:24.270 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:20:24.271 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.042 ms 00:20:24.271 00:20:24.271 --- 10.0.0.4 ping statistics --- 00:20:24.271 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:24.271 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:20:24.271 20:48:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:24.271 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:24.271 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:20:24.271 00:20:24.271 --- 10.0.0.1 ping statistics --- 00:20:24.271 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:24.271 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:20:24.271 20:48:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:20:24.271 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:24.271 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.049 ms 00:20:24.271 00:20:24.271 --- 10.0.0.2 ping statistics --- 00:20:24.271 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:24.271 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:20:24.271 20:48:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:24.271 20:48:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@461 -- # return 0 00:20:24.271 20:48:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:24.271 20:48:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:24.271 20:48:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:24.271 20:48:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:24.271 20:48:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:24.271 20:48:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:24.271 20:48:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:24.271 20:48:19 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:20:24.271 20:48:19 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:20:24.271 20:48:19 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:20:24.271 20:48:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:24.271 20:48:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:24.271 20:48:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:20:24.271 ************************************ 00:20:24.271 START TEST nvmf_digest_clean 00:20:24.271 ************************************ 00:20:24.271 20:48:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:20:24.271 20:48:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 
00:20:24.271 20:48:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:20:24.271 20:48:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:20:24.271 20:48:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:20:24.271 20:48:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:20:24.271 20:48:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:24.271 20:48:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:24.271 20:48:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:20:24.271 20:48:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=80557 00:20:24.271 20:48:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 80557 00:20:24.271 20:48:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 80557 ']' 00:20:24.271 20:48:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:24.271 20:48:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:24.271 20:48:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:24.271 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:24.271 20:48:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:20:24.271 20:48:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:24.271 20:48:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:20:24.271 [2024-11-26 20:48:19.122988] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:20:24.271 [2024-11-26 20:48:19.123092] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:24.528 [2024-11-26 20:48:19.284504] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:24.528 [2024-11-26 20:48:19.354053] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:24.528 [2024-11-26 20:48:19.354121] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:24.528 [2024-11-26 20:48:19.354136] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:24.528 [2024-11-26 20:48:19.354150] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:24.528 [2024-11-26 20:48:19.354173] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
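nvmfappstart launches the target with --wait-for-rpc, so nvmf_tgt comes up idle inside the namespace and only starts its framework after the test has pushed its pre-start configuration over the RPC socket. Roughly, that launch-and-wait pattern looks like the following sketch (the suite's waitforlisten helper is approximated here by polling the default /var/tmp/spdk.sock RPC socket):

  NVMF_TGT=/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt
  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  # single-core target, all tracepoint groups enabled, paused until RPC config is done
  ip netns exec nvmf_tgt_ns_spdk "$NVMF_TGT" -i 0 -e 0xFFFF --wait-for-rpc &
  nvmfpid=$!

  # wait until the app answers on its RPC socket
  until "$RPC" -t 1 rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done

  # ... apply any pre-start settings here, then let the framework initialize
  "$RPC" framework_start_init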
00:20:24.528 [2024-11-26 20:48:19.354589] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:25.465 20:48:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:25.465 20:48:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:20:25.465 20:48:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:25.465 20:48:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:25.465 20:48:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:20:25.465 20:48:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:25.465 20:48:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:20:25.465 20:48:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:20:25.465 20:48:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:20:25.465 20:48:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.465 20:48:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:20:25.465 [2024-11-26 20:48:20.236669] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:25.465 null0 00:20:25.465 [2024-11-26 20:48:20.289325] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:25.465 [2024-11-26 20:48:20.313507] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:25.465 20:48:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.465 20:48:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:20:25.465 20:48:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:20:25.465 20:48:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:20:25.465 20:48:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:20:25.465 20:48:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:20:25.465 20:48:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:20:25.465 20:48:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:20:25.465 20:48:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=80595 00:20:25.465 20:48:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 80595 /var/tmp/bperf.sock 00:20:25.465 20:48:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:20:25.465 20:48:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 80595 ']' 00:20:25.465 20:48:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bperf.sock 00:20:25.465 20:48:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:25.465 20:48:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:20:25.465 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:20:25.465 20:48:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:25.465 20:48:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:20:25.465 [2024-11-26 20:48:20.378843] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:20:25.465 [2024-11-26 20:48:20.378951] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80595 ] 00:20:25.724 [2024-11-26 20:48:20.539446] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:25.724 [2024-11-26 20:48:20.603659] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:26.657 20:48:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:26.658 20:48:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:20:26.658 20:48:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:20:26.658 20:48:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:20:26.658 20:48:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:20:26.916 [2024-11-26 20:48:21.825123] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:26.916 20:48:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:26.916 20:48:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:27.481 nvme0n1 00:20:27.481 20:48:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:20:27.481 20:48:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:20:27.739 Running I/O for 2 seconds... 
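Each run_bperf pass repeats the sequence that just completed: start bdevperf idle on its own RPC socket, initialize its framework, attach the target's NVMe/TCP listener as a bdev with data digest enabled, and then let bdevperf.py drive the timed workload. Condensed from the trace for this first pass (randread, 4 KiB, queue depth 128; the wait for /var/tmp/bperf.sock to appear is elided):

  SPDK=/home/vagrant/spdk_repo/spdk

  # bdevperf on core 1 (-m 2), held in wait mode (-z --wait-for-rpc) on its own socket
  "$SPDK"/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
      -w randread -o 4096 -q 128 -t 2 -z --wait-for-rpc &

  "$SPDK"/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init

  # attach 10.0.0.3:4420 as bdev "nvme0"; --ddgst enables the TCP data digest under test
  "$SPDK"/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller \
      --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # run the configured workload against the resulting nvme0n1 bdev
  "$SPDK"/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests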
00:20:29.612 14097.00 IOPS, 55.07 MiB/s [2024-11-26T20:48:24.605Z] 13844.50 IOPS, 54.08 MiB/s 00:20:29.612 Latency(us) 00:20:29.612 [2024-11-26T20:48:24.605Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:29.612 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:20:29.612 nvme0n1 : 2.01 13840.17 54.06 0.00 0.00 9242.69 2293.76 25340.59 00:20:29.612 [2024-11-26T20:48:24.605Z] =================================================================================================================== 00:20:29.612 [2024-11-26T20:48:24.605Z] Total : 13840.17 54.06 0.00 0.00 9242.69 2293.76 25340.59 00:20:29.612 { 00:20:29.612 "results": [ 00:20:29.612 { 00:20:29.612 "job": "nvme0n1", 00:20:29.612 "core_mask": "0x2", 00:20:29.612 "workload": "randread", 00:20:29.612 "status": "finished", 00:20:29.612 "queue_depth": 128, 00:20:29.612 "io_size": 4096, 00:20:29.612 "runtime": 2.009874, 00:20:29.612 "iops": 13840.171075400747, 00:20:29.612 "mibps": 54.06316826328417, 00:20:29.612 "io_failed": 0, 00:20:29.612 "io_timeout": 0, 00:20:29.612 "avg_latency_us": 9242.691660495382, 00:20:29.612 "min_latency_us": 2293.76, 00:20:29.612 "max_latency_us": 25340.586666666666 00:20:29.612 } 00:20:29.612 ], 00:20:29.612 "core_count": 1 00:20:29.612 } 00:20:29.612 20:48:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:20:29.612 20:48:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:20:29.612 20:48:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:20:29.612 20:48:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:20:29.612 20:48:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:20:29.612 | select(.opcode=="crc32c") 00:20:29.612 | "\(.module_name) \(.executed)"' 00:20:30.194 20:48:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:20:30.194 20:48:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:20:30.194 20:48:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:20:30.194 20:48:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:20:30.194 20:48:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 80595 00:20:30.194 20:48:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 80595 ']' 00:20:30.194 20:48:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 80595 00:20:30.194 20:48:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:20:30.194 20:48:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:30.194 20:48:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80595 00:20:30.194 20:48:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:30.194 20:48:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:30.194 
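The jq filter above is how the test decides whether CRC32C digest work actually happened and which accel module performed it: with DSA disabled (scan_dsa=false) the expected module is software, and the check only passes if the executed counter is non-zero. The verification amounts to roughly this (filter copied from digest.sh; socket as in this run):

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  # pull per-opcode accel statistics from bdevperf and keep only the crc32c row
  read -r acc_module acc_executed < <(
    "$RPC" -s /var/tmp/bperf.sock accel_get_stats \
      | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
  )

  # digest traffic must have generated crc32c work, handled by the software module
  (( acc_executed > 0 )) && [[ $acc_module == software ]] && echo "digest check passed"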
killing process with pid 80595 00:20:30.194 20:48:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80595' 00:20:30.194 20:48:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 80595 00:20:30.194 Received shutdown signal, test time was about 2.000000 seconds 00:20:30.194 00:20:30.194 Latency(us) 00:20:30.194 [2024-11-26T20:48:25.187Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:30.194 [2024-11-26T20:48:25.187Z] =================================================================================================================== 00:20:30.194 [2024-11-26T20:48:25.187Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:30.194 20:48:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 80595 00:20:30.194 20:48:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:20:30.194 20:48:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:20:30.194 20:48:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:20:30.194 20:48:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:20:30.194 20:48:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:20:30.194 20:48:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:20:30.194 20:48:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:20:30.194 20:48:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=80659 00:20:30.194 20:48:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:20:30.195 20:48:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 80659 /var/tmp/bperf.sock 00:20:30.195 20:48:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 80659 ']' 00:20:30.195 20:48:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:20:30.195 20:48:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:30.195 20:48:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:20:30.195 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:20:30.195 20:48:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:30.195 20:48:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:20:30.453 [2024-11-26 20:48:25.205541] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:20:30.453 [2024-11-26 20:48:25.205647] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80659 ] 00:20:30.453 I/O size of 131072 is greater than zero copy threshold (65536). 00:20:30.453 Zero copy mechanism will not be used. 00:20:30.453 [2024-11-26 20:48:25.362569] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:30.453 [2024-11-26 20:48:25.421326] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:30.710 20:48:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:30.710 20:48:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:20:30.710 20:48:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:20:30.710 20:48:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:20:30.710 20:48:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:20:30.968 [2024-11-26 20:48:25.847093] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:30.968 20:48:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:30.968 20:48:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:31.533 nvme0n1 00:20:31.533 20:48:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:20:31.533 20:48:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:20:31.533 I/O size of 131072 is greater than zero copy threshold (65536). 00:20:31.533 Zero copy mechanism will not be used. 00:20:31.533 Running I/O for 2 seconds... 
00:20:33.846 8576.00 IOPS, 1072.00 MiB/s [2024-11-26T20:48:28.839Z] 8632.00 IOPS, 1079.00 MiB/s 00:20:33.846 Latency(us) 00:20:33.846 [2024-11-26T20:48:28.839Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:33.846 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:20:33.846 nvme0n1 : 2.00 8631.64 1078.95 0.00 0.00 1850.89 1716.42 7864.32 00:20:33.846 [2024-11-26T20:48:28.839Z] =================================================================================================================== 00:20:33.846 [2024-11-26T20:48:28.839Z] Total : 8631.64 1078.95 0.00 0.00 1850.89 1716.42 7864.32 00:20:33.846 { 00:20:33.846 "results": [ 00:20:33.846 { 00:20:33.846 "job": "nvme0n1", 00:20:33.846 "core_mask": "0x2", 00:20:33.846 "workload": "randread", 00:20:33.846 "status": "finished", 00:20:33.846 "queue_depth": 16, 00:20:33.846 "io_size": 131072, 00:20:33.846 "runtime": 2.001938, 00:20:33.846 "iops": 8631.635944769518, 00:20:33.846 "mibps": 1078.9544930961897, 00:20:33.846 "io_failed": 0, 00:20:33.846 "io_timeout": 0, 00:20:33.846 "avg_latency_us": 1850.8919647266316, 00:20:33.846 "min_latency_us": 1716.4190476190477, 00:20:33.846 "max_latency_us": 7864.32 00:20:33.846 } 00:20:33.846 ], 00:20:33.846 "core_count": 1 00:20:33.846 } 00:20:33.846 20:48:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:20:33.846 20:48:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:20:33.846 20:48:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:20:33.846 | select(.opcode=="crc32c") 00:20:33.846 | "\(.module_name) \(.executed)"' 00:20:33.846 20:48:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:20:33.846 20:48:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:20:33.846 20:48:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:20:33.846 20:48:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:20:33.846 20:48:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:20:33.846 20:48:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:20:33.846 20:48:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 80659 00:20:33.846 20:48:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 80659 ']' 00:20:33.846 20:48:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 80659 00:20:33.846 20:48:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:20:33.846 20:48:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:33.846 20:48:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80659 00:20:33.846 20:48:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:33.846 killing process with pid 80659 00:20:33.846 20:48:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 
-- # '[' reactor_1 = sudo ']' 00:20:33.846 20:48:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80659' 00:20:33.846 Received shutdown signal, test time was about 2.000000 seconds 00:20:33.846 00:20:33.846 Latency(us) 00:20:33.846 [2024-11-26T20:48:28.839Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:33.846 [2024-11-26T20:48:28.839Z] =================================================================================================================== 00:20:33.846 [2024-11-26T20:48:28.839Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:33.846 20:48:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 80659 00:20:33.846 20:48:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 80659 00:20:34.105 20:48:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:20:34.105 20:48:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:20:34.105 20:48:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:20:34.105 20:48:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:20:34.105 20:48:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:20:34.105 20:48:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:20:34.105 20:48:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:20:34.105 20:48:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:20:34.105 20:48:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=80713 00:20:34.105 20:48:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 80713 /var/tmp/bperf.sock 00:20:34.105 20:48:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 80713 ']' 00:20:34.105 20:48:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:20:34.105 20:48:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:34.105 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:20:34.105 20:48:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:20:34.105 20:48:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:34.105 20:48:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:20:34.105 [2024-11-26 20:48:28.995979] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:20:34.105 [2024-11-26 20:48:28.996115] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80713 ] 00:20:34.363 [2024-11-26 20:48:29.155461] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:34.363 [2024-11-26 20:48:29.211589] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:35.299 20:48:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:35.299 20:48:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:20:35.299 20:48:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:20:35.299 20:48:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:20:35.299 20:48:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:20:35.558 [2024-11-26 20:48:30.377265] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:35.558 20:48:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:35.558 20:48:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:35.815 nvme0n1 00:20:36.073 20:48:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:20:36.073 20:48:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:20:36.073 Running I/O for 2 seconds... 
00:20:37.937 16130.00 IOPS, 63.01 MiB/s [2024-11-26T20:48:32.930Z] 15748.50 IOPS, 61.52 MiB/s 00:20:37.937 Latency(us) 00:20:37.937 [2024-11-26T20:48:32.930Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:37.937 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:20:37.937 nvme0n1 : 2.01 15792.49 61.69 0.00 0.00 8097.90 2402.99 20846.69 00:20:37.937 [2024-11-26T20:48:32.930Z] =================================================================================================================== 00:20:37.937 [2024-11-26T20:48:32.930Z] Total : 15792.49 61.69 0.00 0.00 8097.90 2402.99 20846.69 00:20:37.937 { 00:20:37.937 "results": [ 00:20:37.937 { 00:20:37.937 "job": "nvme0n1", 00:20:37.937 "core_mask": "0x2", 00:20:37.937 "workload": "randwrite", 00:20:37.937 "status": "finished", 00:20:37.937 "queue_depth": 128, 00:20:37.937 "io_size": 4096, 00:20:37.937 "runtime": 2.010576, 00:20:37.937 "iops": 15792.489316494377, 00:20:37.937 "mibps": 61.68941139255616, 00:20:37.937 "io_failed": 0, 00:20:37.937 "io_timeout": 0, 00:20:37.937 "avg_latency_us": 8097.901686642911, 00:20:37.937 "min_latency_us": 2402.9866666666667, 00:20:37.937 "max_latency_us": 20846.689523809524 00:20:37.937 } 00:20:37.937 ], 00:20:37.937 "core_count": 1 00:20:37.937 } 00:20:38.195 20:48:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:20:38.195 20:48:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:20:38.195 20:48:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:20:38.195 20:48:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:20:38.195 | select(.opcode=="crc32c") 00:20:38.195 | "\(.module_name) \(.executed)"' 00:20:38.195 20:48:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:20:38.453 20:48:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:20:38.453 20:48:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:20:38.453 20:48:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:20:38.454 20:48:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:20:38.454 20:48:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 80713 00:20:38.454 20:48:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 80713 ']' 00:20:38.454 20:48:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 80713 00:20:38.454 20:48:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:20:38.454 20:48:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:38.454 20:48:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80713 00:20:38.454 20:48:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:38.454 20:48:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 
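The MiB/s column in these Latency tables is simply IOPS multiplied by the configured I/O size: the randwrite table above reports 15792.49 IOPS at 4096-byte I/Os, i.e. about 61.69 MiB/s, and the 131072-byte digest runs land near 1079 MiB/s at roughly 8.6k IOPS for the same reason. A quick sanity check of both figures:

  awk 'BEGIN { printf "%.2f MiB/s\n", 15792.49 * 4096 / (1024 * 1024) }'    # ~61.69, matches the table
  awk 'BEGIN { printf "%.2f MiB/s\n", 8631.64 * 131072 / (1024 * 1024) }'   # ~1078.95, matches the 128 KiB run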
00:20:38.454 killing process with pid 80713 00:20:38.454 20:48:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80713' 00:20:38.454 20:48:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 80713 00:20:38.454 Received shutdown signal, test time was about 2.000000 seconds 00:20:38.454 00:20:38.454 Latency(us) 00:20:38.454 [2024-11-26T20:48:33.447Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:38.454 [2024-11-26T20:48:33.447Z] =================================================================================================================== 00:20:38.454 [2024-11-26T20:48:33.447Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:38.454 20:48:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 80713 00:20:38.712 20:48:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:20:38.712 20:48:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:20:38.712 20:48:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:20:38.712 20:48:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:20:38.712 20:48:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:20:38.712 20:48:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:20:38.712 20:48:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:20:38.712 20:48:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=80769 00:20:38.712 20:48:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 80769 /var/tmp/bperf.sock 00:20:38.712 20:48:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 80769 ']' 00:20:38.712 20:48:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:20:38.712 20:48:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:38.712 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:20:38.712 20:48:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:20:38.712 20:48:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:38.712 20:48:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:20:38.712 20:48:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:20:38.712 [2024-11-26 20:48:33.527274] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
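
The second clean-digest pass only changes the I/O profile; the bdevperf flags on the command line traced above are worth spelling out (flag meanings per the standard SPDK application / bdevperf options; an annotated sketch, not part of the test script):

  # bdevperf invocation as traced above, with the flag meanings spelled out:
  #   -m 2             core mask 0x2 (reactor runs on core 1)
  #   -r <socket>      RPC listen socket the test script talks to
  #   -w randwrite     workload pattern
  #   -o 131072        I/O size in bytes (128 KiB)
  #   -t 2             run time in seconds
  #   -q 16            queue depth
  #   -z               do not start I/O until perform_tests arrives over RPC
  #   --wait-for-rpc   hold framework init until framework_start_init is called
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
      -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc
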
00:20:38.712 [2024-11-26 20:48:33.527393] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80769 ] 00:20:38.712 I/O size of 131072 is greater than zero copy threshold (65536). 00:20:38.712 Zero copy mechanism will not be used. 00:20:38.712 [2024-11-26 20:48:33.688745] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:38.971 [2024-11-26 20:48:33.748338] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:38.971 20:48:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:38.971 20:48:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:20:38.971 20:48:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:20:38.971 20:48:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:20:38.971 20:48:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:20:39.230 [2024-11-26 20:48:34.046182] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:39.230 20:48:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:39.230 20:48:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:39.488 nvme0n1 00:20:39.488 20:48:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:20:39.488 20:48:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:20:39.747 I/O size of 131072 is greater than zero copy threshold (65536). 00:20:39.747 Zero copy mechanism will not be used. 00:20:39.747 Running I/O for 2 seconds... 
00:20:41.619 8828.00 IOPS, 1103.50 MiB/s [2024-11-26T20:48:36.612Z] 8771.00 IOPS, 1096.38 MiB/s 00:20:41.619 Latency(us) 00:20:41.619 [2024-11-26T20:48:36.612Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:41.619 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:20:41.619 nvme0n1 : 2.00 8764.98 1095.62 0.00 0.00 1821.86 1271.71 8488.47 00:20:41.619 [2024-11-26T20:48:36.612Z] =================================================================================================================== 00:20:41.619 [2024-11-26T20:48:36.612Z] Total : 8764.98 1095.62 0.00 0.00 1821.86 1271.71 8488.47 00:20:41.619 { 00:20:41.619 "results": [ 00:20:41.619 { 00:20:41.619 "job": "nvme0n1", 00:20:41.619 "core_mask": "0x2", 00:20:41.619 "workload": "randwrite", 00:20:41.619 "status": "finished", 00:20:41.619 "queue_depth": 16, 00:20:41.619 "io_size": 131072, 00:20:41.619 "runtime": 2.003998, 00:20:41.619 "iops": 8764.97880736408, 00:20:41.619 "mibps": 1095.62235092051, 00:20:41.619 "io_failed": 0, 00:20:41.619 "io_timeout": 0, 00:20:41.619 "avg_latency_us": 1821.8555831537283, 00:20:41.619 "min_latency_us": 1271.7104761904761, 00:20:41.619 "max_latency_us": 8488.47238095238 00:20:41.619 } 00:20:41.619 ], 00:20:41.619 "core_count": 1 00:20:41.619 } 00:20:41.619 20:48:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:20:41.619 20:48:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:20:41.619 20:48:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:20:41.619 20:48:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:20:41.619 20:48:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:20:41.619 | select(.opcode=="crc32c") 00:20:41.619 | "\(.module_name) \(.executed)"' 00:20:41.878 20:48:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:20:41.878 20:48:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:20:41.878 20:48:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:20:41.878 20:48:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:20:41.878 20:48:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 80769 00:20:41.878 20:48:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 80769 ']' 00:20:41.878 20:48:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 80769 00:20:41.878 20:48:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:20:41.878 20:48:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:41.878 20:48:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80769 00:20:42.135 20:48:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:42.135 20:48:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 
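
The JSON block above is what bdevperf.py's perform_tests prints; the headline numbers can be pulled out with jq. A sketch, assuming the block has been saved to a file (bperf_results.json is a placeholder name, and the field names are the ones shown in the block above):

  # Summarize each job from a saved perform_tests result.
  jq -r '.results[] | "\(.job): \(.iops) IOPS, \(.mibps) MiB/s, avg \(.avg_latency_us) us"' \
      bperf_results.json
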
00:20:42.135 killing process with pid 80769 00:20:42.135 20:48:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80769' 00:20:42.135 20:48:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 80769 00:20:42.135 Received shutdown signal, test time was about 2.000000 seconds 00:20:42.135 00:20:42.135 Latency(us) 00:20:42.135 [2024-11-26T20:48:37.128Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:42.135 [2024-11-26T20:48:37.128Z] =================================================================================================================== 00:20:42.135 [2024-11-26T20:48:37.128Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:42.135 20:48:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 80769 00:20:42.135 20:48:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 80557 00:20:42.135 20:48:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 80557 ']' 00:20:42.135 20:48:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 80557 00:20:42.135 20:48:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:20:42.135 20:48:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:42.135 20:48:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80557 00:20:42.135 20:48:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:42.136 20:48:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:42.136 killing process with pid 80557 00:20:42.136 20:48:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80557' 00:20:42.136 20:48:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 80557 00:20:42.136 20:48:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 80557 00:20:42.393 00:20:42.393 real 0m18.260s 00:20:42.393 user 0m35.044s 00:20:42.393 sys 0m5.762s 00:20:42.393 20:48:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:42.393 20:48:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:20:42.393 ************************************ 00:20:42.393 END TEST nvmf_digest_clean 00:20:42.393 ************************************ 00:20:42.393 20:48:37 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:20:42.393 20:48:37 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:42.393 20:48:37 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:42.393 20:48:37 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:20:42.393 ************************************ 00:20:42.393 START TEST nvmf_digest_error 00:20:42.393 ************************************ 00:20:42.393 20:48:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # run_digest_error 00:20:42.393 20:48:37 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:20:42.393 20:48:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:42.393 20:48:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:42.393 20:48:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:42.393 20:48:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:20:42.393 20:48:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=80850 00:20:42.393 20:48:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 80850 00:20:42.393 20:48:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 80850 ']' 00:20:42.393 20:48:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:42.393 20:48:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:42.393 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:42.393 20:48:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:42.393 20:48:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:42.650 20:48:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:42.650 [2024-11-26 20:48:37.443500] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:20:42.650 [2024-11-26 20:48:37.443604] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:42.650 [2024-11-26 20:48:37.597237] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:42.908 [2024-11-26 20:48:37.661455] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:42.908 [2024-11-26 20:48:37.661710] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:42.908 [2024-11-26 20:48:37.661991] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:42.908 [2024-11-26 20:48:37.662257] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:42.908 [2024-11-26 20:48:37.662378] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
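
Here the error-path test brings up its own target: nvmf_tgt is started inside the nvmf_tgt_ns_spdk namespace with instance id 0, every tracepoint group enabled (-e 0xFFFF), and init held for RPC; crc32c is then routed to the accel "error" module (see the accel_assign_opc call traced just below) so corrupted digests can be injected later in the run. A sketch of the startup plus the trace-capture command quoted in the notices above:

  # Target startup as traced above (instance id 0, all tracepoint groups, init held for RPC).
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
      -i 0 -e 0xFFFF --wait-for-rpc &

  # Snapshot the nvmf tracepoints at runtime (command quoted in the notices above),
  # or keep /dev/shm/nvmf_trace.0 for offline analysis.
  spdk_trace -s nvmf -i 0
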
00:20:42.908 [2024-11-26 20:48:37.662937] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:42.908 20:48:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:42.908 20:48:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:20:42.908 20:48:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:42.908 20:48:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:42.908 20:48:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:42.908 20:48:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:42.908 20:48:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:20:42.908 20:48:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.908 20:48:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:42.908 [2024-11-26 20:48:37.763623] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:20:42.908 20:48:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.908 20:48:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:20:42.908 20:48:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:20:42.908 20:48:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.908 20:48:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:42.908 [2024-11-26 20:48:37.824969] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:42.908 null0 00:20:42.908 [2024-11-26 20:48:37.883490] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:43.166 [2024-11-26 20:48:37.907649] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:43.166 20:48:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.166 20:48:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:20:43.166 20:48:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:20:43.166 20:48:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:20:43.166 20:48:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:20:43.166 20:48:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:20:43.166 20:48:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:20:43.166 20:48:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80875 00:20:43.166 20:48:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80875 /var/tmp/bperf.sock 00:20:43.166 20:48:37 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 80875 ']' 00:20:43.166 20:48:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:20:43.166 20:48:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:43.166 20:48:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:20:43.166 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:20:43.166 20:48:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:43.166 20:48:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:43.166 [2024-11-26 20:48:37.975531] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:20:43.166 [2024-11-26 20:48:37.976192] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80875 ] 00:20:43.166 [2024-11-26 20:48:38.134828] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:43.423 [2024-11-26 20:48:38.196491] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:43.423 [2024-11-26 20:48:38.250419] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:43.423 20:48:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:43.423 20:48:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:20:43.423 20:48:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:20:43.423 20:48:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:20:43.681 20:48:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:20:43.681 20:48:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.681 20:48:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:43.681 20:48:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.681 20:48:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:43.681 20:48:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:44.274 nvme0n1 00:20:44.274 20:48:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:20:44.274 20:48:39 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.274 20:48:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:44.274 20:48:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.274 20:48:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:20:44.274 20:48:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:20:44.274 Running I/O for 2 seconds... 00:20:44.274 [2024-11-26 20:48:39.188289] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20cdfb0) 00:20:44.275 [2024-11-26 20:48:39.188354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8787 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.275 [2024-11-26 20:48:39.188371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.275 [2024-11-26 20:48:39.205123] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20cdfb0) 00:20:44.275 [2024-11-26 20:48:39.205183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21115 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.275 [2024-11-26 20:48:39.205198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.275 [2024-11-26 20:48:39.221921] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20cdfb0) 00:20:44.275 [2024-11-26 20:48:39.221960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:622 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.275 [2024-11-26 20:48:39.221973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.275 [2024-11-26 20:48:39.238375] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20cdfb0) 00:20:44.275 [2024-11-26 20:48:39.238561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17573 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.275 [2024-11-26 20:48:39.238580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.275 [2024-11-26 20:48:39.255269] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20cdfb0) 00:20:44.275 [2024-11-26 20:48:39.255316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1131 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.275 [2024-11-26 20:48:39.255330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.533 [2024-11-26 20:48:39.272132] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20cdfb0) 00:20:44.533 [2024-11-26 20:48:39.272177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1993 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.533 [2024-11-26 20:48:39.272191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.533 [2024-11-26 20:48:39.288963] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20cdfb0) 00:20:44.533 [2024-11-26 20:48:39.289007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23503 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.533 [2024-11-26 20:48:39.289021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.533 [2024-11-26 20:48:39.305871] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20cdfb0) 00:20:44.533 [2024-11-26 20:48:39.305904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6431 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.533 [2024-11-26 20:48:39.305916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.533 [2024-11-26 20:48:39.322652] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20cdfb0) 00:20:44.533 [2024-11-26 20:48:39.322685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:16784 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.534 [2024-11-26 20:48:39.322698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.534 [2024-11-26 20:48:39.339512] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20cdfb0) 00:20:44.534 [2024-11-26 20:48:39.339544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:22220 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.534 [2024-11-26 20:48:39.339557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.534 [2024-11-26 20:48:39.356210] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20cdfb0) 00:20:44.534 [2024-11-26 20:48:39.356243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:22733 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.534 [2024-11-26 20:48:39.356256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.534 [2024-11-26 20:48:39.372790] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20cdfb0) 00:20:44.534 [2024-11-26 20:48:39.372822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:10773 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.534 [2024-11-26 20:48:39.372835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.534 [2024-11-26 20:48:39.389208] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20cdfb0) 00:20:44.534 [2024-11-26 20:48:39.389240] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:25 nsid:1 lba:1661 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.534 [2024-11-26 20:48:39.389254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.534 [2024-11-26 20:48:39.406009] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20cdfb0) 00:20:44.534 [2024-11-26 20:48:39.406043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:141 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.534 [2024-11-26 20:48:39.406055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.534 [2024-11-26 20:48:39.422617] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20cdfb0) 00:20:44.534 [2024-11-26 20:48:39.422650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:2078 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.534 [2024-11-26 20:48:39.422679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.534 [2024-11-26 20:48:39.440373] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20cdfb0) 00:20:44.534 [2024-11-26 20:48:39.440418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:17480 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.534 [2024-11-26 20:48:39.440435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.534 [2024-11-26 20:48:39.457025] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20cdfb0) 00:20:44.534 [2024-11-26 20:48:39.457064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:2403 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.534 [2024-11-26 20:48:39.457077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.534 [2024-11-26 20:48:39.473438] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20cdfb0) 00:20:44.534 [2024-11-26 20:48:39.473475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:19207 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.534 [2024-11-26 20:48:39.473488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.534 [2024-11-26 20:48:39.489173] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20cdfb0) 00:20:44.534 [2024-11-26 20:48:39.489208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:19687 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.534 [2024-11-26 20:48:39.489221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.534 [2024-11-26 20:48:39.505723] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20cdfb0) 00:20:44.534 [2024-11-26 20:48:39.505760] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:6251 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.534 [2024-11-26 20:48:39.505773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.534 [2024-11-26 20:48:39.522468] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20cdfb0) 00:20:44.534 [2024-11-26 20:48:39.522502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:19421 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.534 [2024-11-26 20:48:39.522515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.792 [2024-11-26 20:48:39.539234] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20cdfb0) 00:20:44.792 [2024-11-26 20:48:39.539267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:3873 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.792 [2024-11-26 20:48:39.539280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.792 [2024-11-26 20:48:39.555725] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20cdfb0) 00:20:44.792 [2024-11-26 20:48:39.555758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:189 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.792 [2024-11-26 20:48:39.555771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.792 [2024-11-26 20:48:39.572335] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20cdfb0) 00:20:44.792 [2024-11-26 20:48:39.572378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:24861 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.792 [2024-11-26 20:48:39.572392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.792 [2024-11-26 20:48:39.588837] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20cdfb0) 00:20:44.792 [2024-11-26 20:48:39.588869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:11513 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.792 [2024-11-26 20:48:39.588882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.792 [2024-11-26 20:48:39.605368] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20cdfb0) 00:20:44.792 [2024-11-26 20:48:39.605403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:2744 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.792 [2024-11-26 20:48:39.605416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.792 [2024-11-26 20:48:39.621789] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x20cdfb0) 00:20:44.792 [2024-11-26 20:48:39.621822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:423 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.792 [2024-11-26 20:48:39.621835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.792 [2024-11-26 20:48:39.638717] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20cdfb0) 00:20:44.792 [2024-11-26 20:48:39.638755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:4337 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.792 [2024-11-26 20:48:39.638769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.792 [2024-11-26 20:48:39.655463] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20cdfb0) 00:20:44.793 [2024-11-26 20:48:39.655517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:10037 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.793 [2024-11-26 20:48:39.655530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.793 [2024-11-26 20:48:39.672085] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20cdfb0) 00:20:44.793 [2024-11-26 20:48:39.672120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23187 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.793 [2024-11-26 20:48:39.672133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.793 [2024-11-26 20:48:39.688903] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20cdfb0) 00:20:44.793 [2024-11-26 20:48:39.688937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:19481 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.793 [2024-11-26 20:48:39.688951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.793 [2024-11-26 20:48:39.705548] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20cdfb0) 00:20:44.793 [2024-11-26 20:48:39.705582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:8297 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.793 [2024-11-26 20:48:39.705596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.793 [2024-11-26 20:48:39.722791] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20cdfb0) 00:20:44.793 [2024-11-26 20:48:39.722831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:25214 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.793 [2024-11-26 20:48:39.722845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.793 [2024-11-26 20:48:39.739268] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20cdfb0) 00:20:44.793 [2024-11-26 20:48:39.739319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:7276 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.793 [2024-11-26 20:48:39.739335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.793 [2024-11-26 20:48:39.754757] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20cdfb0) 00:20:44.793 [2024-11-26 20:48:39.754793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:11513 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.793 [2024-11-26 20:48:39.754806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.793 [2024-11-26 20:48:39.770045] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20cdfb0) 00:20:44.793 [2024-11-26 20:48:39.770076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:16373 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.793 [2024-11-26 20:48:39.770087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:45.052 [2024-11-26 20:48:39.785126] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20cdfb0) 00:20:45.052 [2024-11-26 20:48:39.785167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:14898 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.052 [2024-11-26 20:48:39.785179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:45.052 [2024-11-26 20:48:39.801593] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20cdfb0) 00:20:45.052 [2024-11-26 20:48:39.801629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:1261 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.052 [2024-11-26 20:48:39.801642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:45.052 [2024-11-26 20:48:39.818359] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20cdfb0) 00:20:45.052 [2024-11-26 20:48:39.818393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:14236 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.052 [2024-11-26 20:48:39.818406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:45.052 [2024-11-26 20:48:39.835135] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20cdfb0) 00:20:45.052 [2024-11-26 20:48:39.835180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:12792 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.052 [2024-11-26 20:48:39.835194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:20:45.052 [2024-11-26 20:48:39.852007] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20cdfb0) 00:20:45.052 [2024-11-26 20:48:39.852042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:22303 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.052 [2024-11-26 20:48:39.852055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:45.052 [2024-11-26 20:48:39.868543] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20cdfb0) 00:20:45.052 [2024-11-26 20:48:39.868574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:12718 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.052 [2024-11-26 20:48:39.868586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:45.052 [2024-11-26 20:48:39.884604] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20cdfb0) 00:20:45.052 [2024-11-26 20:48:39.884635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:16328 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.052 [2024-11-26 20:48:39.884647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:45.052 [2024-11-26 20:48:39.901181] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20cdfb0) 00:20:45.052 [2024-11-26 20:48:39.901223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:16579 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.052 [2024-11-26 20:48:39.901237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:45.052 [2024-11-26 20:48:39.918100] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20cdfb0) 00:20:45.052 [2024-11-26 20:48:39.918131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:1408 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.052 [2024-11-26 20:48:39.918144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:45.052 [2024-11-26 20:48:39.934143] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20cdfb0) 00:20:45.052 [2024-11-26 20:48:39.934182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:25402 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.052 [2024-11-26 20:48:39.934211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:45.052 [2024-11-26 20:48:39.951051] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20cdfb0) 00:20:45.052 [2024-11-26 20:48:39.951085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:2995 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.052 [2024-11-26 20:48:39.951098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:45.052 [2024-11-26 20:48:39.967078] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20cdfb0) 00:20:45.052 [2024-11-26 20:48:39.967112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:17560 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.052 [2024-11-26 20:48:39.967125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:45.052 [2024-11-26 20:48:39.983473] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20cdfb0) 00:20:45.052 [2024-11-26 20:48:39.983504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:7941 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.052 [2024-11-26 20:48:39.983516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:45.052 [2024-11-26 20:48:39.999744] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20cdfb0) 00:20:45.052 [2024-11-26 20:48:39.999777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:16139 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.052 [2024-11-26 20:48:39.999789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:45.052 [2024-11-26 20:48:40.016245] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20cdfb0) 00:20:45.052 [2024-11-26 20:48:40.016278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:22018 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.052 [2024-11-26 20:48:40.016291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:45.052 [2024-11-26 20:48:40.032743] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20cdfb0) 00:20:45.052 [2024-11-26 20:48:40.032785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:5543 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.052 [2024-11-26 20:48:40.032813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:45.311 [2024-11-26 20:48:40.049311] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20cdfb0) 00:20:45.311 [2024-11-26 20:48:40.049344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:9419 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.311 [2024-11-26 20:48:40.049356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:45.311 [2024-11-26 20:48:40.064859] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20cdfb0) 00:20:45.311 [2024-11-26 20:48:40.064885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:10633 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.311 [2024-11-26 20:48:40.064912] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:45.311 [2024-11-26 20:48:40.080280] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20cdfb0) 00:20:45.311 [2024-11-26 20:48:40.080313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:4043 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.311 [2024-11-26 20:48:40.080325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:45.311 [2024-11-26 20:48:40.096571] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20cdfb0) 00:20:45.311 [2024-11-26 20:48:40.096604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:21010 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.311 [2024-11-26 20:48:40.096617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:45.311 [2024-11-26 20:48:40.112183] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20cdfb0) 00:20:45.311 [2024-11-26 20:48:40.112213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:10591 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.311 [2024-11-26 20:48:40.112225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:45.311 [2024-11-26 20:48:40.127895] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20cdfb0) 00:20:45.311 [2024-11-26 20:48:40.127929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:23567 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.311 [2024-11-26 20:48:40.127942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:45.311 [2024-11-26 20:48:40.144112] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20cdfb0) 00:20:45.311 [2024-11-26 20:48:40.144146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:22162 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.311 [2024-11-26 20:48:40.144172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:45.311 [2024-11-26 20:48:40.159931] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20cdfb0) 00:20:45.311 [2024-11-26 20:48:40.159975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:12267 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.311 [2024-11-26 20:48:40.159987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:45.311 15307.00 IOPS, 59.79 MiB/s [2024-11-26T20:48:40.304Z] [2024-11-26 20:48:40.177951] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20cdfb0) 00:20:45.311 [2024-11-26 20:48:40.177982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 
nsid:1 lba:703 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.311 [2024-11-26 20:48:40.177994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:45.311 [2024-11-26 20:48:40.193621] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20cdfb0) 00:20:45.311 [2024-11-26 20:48:40.193667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:4835 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.311 [2024-11-26 20:48:40.193678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:45.311 [2024-11-26 20:48:40.209563] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20cdfb0) 00:20:45.311 [2024-11-26 20:48:40.209595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:11854 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.311 [2024-11-26 20:48:40.209607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:45.311 [2024-11-26 20:48:40.231863] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20cdfb0) 00:20:45.312 [2024-11-26 20:48:40.231896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:15156 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.312 [2024-11-26 20:48:40.231908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:45.312 [2024-11-26 20:48:40.247425] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20cdfb0) 00:20:45.312 [2024-11-26 20:48:40.247461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:21592 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.312 [2024-11-26 20:48:40.247473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:45.312 [2024-11-26 20:48:40.263523] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20cdfb0) 00:20:45.312 [2024-11-26 20:48:40.263556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:24387 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.312 [2024-11-26 20:48:40.263569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:45.312 [2024-11-26 20:48:40.280187] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20cdfb0) 00:20:45.312 [2024-11-26 20:48:40.280221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:13155 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.312 [2024-11-26 20:48:40.280234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:45.312 [2024-11-26 20:48:40.296052] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20cdfb0) 00:20:45.312 [2024-11-26 20:48:40.296086] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:17373 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.312 [2024-11-26 20:48:40.296099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:45.570 [2024-11-26 20:48:40.311975] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20cdfb0) 00:20:45.570 [2024-11-26 20:48:40.312009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:6408 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.570 [2024-11-26 20:48:40.312022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:45.570 [2024-11-26 20:48:40.327895] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20cdfb0) 00:20:45.570 [2024-11-26 20:48:40.327928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:7512 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.570 [2024-11-26 20:48:40.327941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:45.570 [2024-11-26 20:48:40.344177] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20cdfb0) 00:20:45.570 [2024-11-26 20:48:40.344210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:18600 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.570 [2024-11-26 20:48:40.344222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:45.570 [2024-11-26 20:48:40.360252] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20cdfb0) 00:20:45.570 [2024-11-26 20:48:40.360286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:23497 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.570 [2024-11-26 20:48:40.360299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:45.570 [2024-11-26 20:48:40.375628] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20cdfb0) 00:20:45.570 [2024-11-26 20:48:40.375662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:17692 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.570 [2024-11-26 20:48:40.375675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:45.570 [2024-11-26 20:48:40.391352] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20cdfb0) 00:20:45.570 [2024-11-26 20:48:40.391387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:1592 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.570 [2024-11-26 20:48:40.391399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:45.570 [2024-11-26 20:48:40.407824] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x20cdfb0) 00:20:45.570 [2024-11-26 20:48:40.407858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:355 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.570 [2024-11-26 20:48:40.407871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:45.570 [2024-11-26 20:48:40.423429] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20cdfb0) 00:20:45.570 [2024-11-26 20:48:40.423460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:22729 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.570 [2024-11-26 20:48:40.423473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:45.570 [2024-11-26 20:48:40.439091] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20cdfb0) 00:20:45.570 [2024-11-26 20:48:40.439124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:23634 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.570 [2024-11-26 20:48:40.439137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:45.570 [2024-11-26 20:48:40.455268] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20cdfb0) 00:20:45.570 [2024-11-26 20:48:40.455309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:17903 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.570 [2024-11-26 20:48:40.455338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:45.570 [2024-11-26 20:48:40.471459] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20cdfb0) 00:20:45.570 [2024-11-26 20:48:40.471493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:2887 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.570 [2024-11-26 20:48:40.471506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:45.570 [2024-11-26 20:48:40.487833] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20cdfb0) 00:20:45.570 [2024-11-26 20:48:40.487866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:13981 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.570 [2024-11-26 20:48:40.487878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:45.570 [2024-11-26 20:48:40.503604] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20cdfb0) 00:20:45.570 [2024-11-26 20:48:40.503636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:8150 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.570 [2024-11-26 20:48:40.503649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:45.570 [2024-11-26 20:48:40.518778] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20cdfb0) 00:20:45.570 [2024-11-26 20:48:40.518810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:12663 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.570 [2024-11-26 20:48:40.518822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:45.570 [2024-11-26 20:48:40.533321] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20cdfb0) 00:20:45.570 [2024-11-26 20:48:40.533349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:1148 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.570 [2024-11-26 20:48:40.533376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:45.571 [2024-11-26 20:48:40.548964] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20cdfb0) 00:20:45.571 [2024-11-26 20:48:40.548996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:17658 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.571 [2024-11-26 20:48:40.549009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:45.830 [2024-11-26 20:48:40.566066] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20cdfb0) 00:20:45.830 [2024-11-26 20:48:40.566100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:24649 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.830 [2024-11-26 20:48:40.566113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:45.830 [2024-11-26 20:48:40.583162] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20cdfb0) 00:20:45.830 [2024-11-26 20:48:40.583203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:22712 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.830 [2024-11-26 20:48:40.583216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:45.830 [2024-11-26 20:48:40.599696] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20cdfb0) 00:20:45.830 [2024-11-26 20:48:40.599728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:21910 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.830 [2024-11-26 20:48:40.599741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:45.830 [2024-11-26 20:48:40.616466] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20cdfb0) 00:20:45.830 [2024-11-26 20:48:40.616496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:12670 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.830 [2024-11-26 20:48:40.616507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:20:45.830 [2024-11-26 20:48:40.633975] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20cdfb0) 00:20:45.830 [2024-11-26 20:48:40.634028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:7092 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.830 [2024-11-26 20:48:40.634049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:45.830 [2024-11-26 20:48:40.650791] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20cdfb0) 00:20:45.830 [2024-11-26 20:48:40.650829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:7307 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.830 [2024-11-26 20:48:40.650844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:45.830 [2024-11-26 20:48:40.667810] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20cdfb0) 00:20:45.830 [2024-11-26 20:48:40.667847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:14236 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.830 [2024-11-26 20:48:40.667861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:45.830 [2024-11-26 20:48:40.684834] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20cdfb0) 00:20:45.830 [2024-11-26 20:48:40.684869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:8636 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.830 [2024-11-26 20:48:40.684882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:45.830 [2024-11-26 20:48:40.701790] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20cdfb0) 00:20:45.830 [2024-11-26 20:48:40.701825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:20376 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.830 [2024-11-26 20:48:40.701839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:45.830 [2024-11-26 20:48:40.718576] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20cdfb0) 00:20:45.830 [2024-11-26 20:48:40.718611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:24763 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.830 [2024-11-26 20:48:40.718625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:45.830 [2024-11-26 20:48:40.735352] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20cdfb0) 00:20:45.830 [2024-11-26 20:48:40.735385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:8704 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.830 [2024-11-26 20:48:40.735398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:45.830 [2024-11-26 20:48:40.753024] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20cdfb0) 00:20:45.830 [2024-11-26 20:48:40.753060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:7478 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.830 [2024-11-26 20:48:40.753074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:45.830 [2024-11-26 20:48:40.769933] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20cdfb0) 00:20:45.830 [2024-11-26 20:48:40.769974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:19169 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.830 [2024-11-26 20:48:40.769989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:45.830 [2024-11-26 20:48:40.786733] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20cdfb0) 00:20:45.830 [2024-11-26 20:48:40.786767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:19971 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.830 [2024-11-26 20:48:40.786780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:45.830 [2024-11-26 20:48:40.803105] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20cdfb0) 00:20:45.830 [2024-11-26 20:48:40.803139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23724 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.830 [2024-11-26 20:48:40.803152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:45.830 [2024-11-26 20:48:40.819646] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20cdfb0) 00:20:45.830 [2024-11-26 20:48:40.819680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:25436 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.830 [2024-11-26 20:48:40.819692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:46.088 [2024-11-26 20:48:40.836461] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20cdfb0) 00:20:46.088 [2024-11-26 20:48:40.836505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:1005 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.088 [2024-11-26 20:48:40.836518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:46.088 [2024-11-26 20:48:40.853448] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20cdfb0) 00:20:46.088 [2024-11-26 20:48:40.853482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:21513 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.088 [2024-11-26 20:48:40.853496] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:46.088 [2024-11-26 20:48:40.869939] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20cdfb0) 00:20:46.088 [2024-11-26 20:48:40.869971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:8691 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.088 [2024-11-26 20:48:40.870000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:46.088 [2024-11-26 20:48:40.886320] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20cdfb0) 00:20:46.088 [2024-11-26 20:48:40.886352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:401 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.088 [2024-11-26 20:48:40.886364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:46.088 [2024-11-26 20:48:40.903371] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20cdfb0) 00:20:46.088 [2024-11-26 20:48:40.903427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:16009 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.088 [2024-11-26 20:48:40.903441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:46.088 [2024-11-26 20:48:40.920494] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20cdfb0) 00:20:46.088 [2024-11-26 20:48:40.920530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:2920 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.088 [2024-11-26 20:48:40.920543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:46.088 [2024-11-26 20:48:40.937035] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20cdfb0) 00:20:46.088 [2024-11-26 20:48:40.937068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:20636 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.088 [2024-11-26 20:48:40.937080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:46.088 [2024-11-26 20:48:40.953838] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20cdfb0) 00:20:46.088 [2024-11-26 20:48:40.953870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:22821 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.088 [2024-11-26 20:48:40.953883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:46.089 [2024-11-26 20:48:40.970527] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20cdfb0) 00:20:46.089 [2024-11-26 20:48:40.970560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:14834 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:46.089 [2024-11-26 20:48:40.970573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:46.089 [2024-11-26 20:48:40.987482] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20cdfb0) 00:20:46.089 [2024-11-26 20:48:40.987515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:7082 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.089 [2024-11-26 20:48:40.987528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:46.089 [2024-11-26 20:48:41.004384] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20cdfb0) 00:20:46.089 [2024-11-26 20:48:41.004418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:3547 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.089 [2024-11-26 20:48:41.004431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:46.089 [2024-11-26 20:48:41.021128] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20cdfb0) 00:20:46.089 [2024-11-26 20:48:41.021187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:9563 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.089 [2024-11-26 20:48:41.021201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:46.089 [2024-11-26 20:48:41.038207] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20cdfb0) 00:20:46.089 [2024-11-26 20:48:41.038245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:5847 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.089 [2024-11-26 20:48:41.038270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:46.089 [2024-11-26 20:48:41.055379] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20cdfb0) 00:20:46.089 [2024-11-26 20:48:41.055412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:8521 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.089 [2024-11-26 20:48:41.055426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:46.089 [2024-11-26 20:48:41.072375] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20cdfb0) 00:20:46.089 [2024-11-26 20:48:41.072409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:6996 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.089 [2024-11-26 20:48:41.072422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:46.347 [2024-11-26 20:48:41.089415] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20cdfb0) 00:20:46.347 [2024-11-26 20:48:41.089458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:3427 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.347 [2024-11-26 20:48:41.089471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:46.347 [2024-11-26 20:48:41.106504] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20cdfb0) 00:20:46.347 [2024-11-26 20:48:41.106537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:2389 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.347 [2024-11-26 20:48:41.106549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:46.347 [2024-11-26 20:48:41.123610] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20cdfb0) 00:20:46.347 [2024-11-26 20:48:41.123645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:7084 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.347 [2024-11-26 20:48:41.123659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:46.347 [2024-11-26 20:48:41.140619] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20cdfb0) 00:20:46.347 [2024-11-26 20:48:41.140651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:2337 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.347 [2024-11-26 20:48:41.140664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:46.347 [2024-11-26 20:48:41.157581] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20cdfb0) 00:20:46.347 [2024-11-26 20:48:41.157616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16993 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.347 [2024-11-26 20:48:41.157629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:46.347 15307.00 IOPS, 59.79 MiB/s 00:20:46.347 Latency(us) 00:20:46.347 [2024-11-26T20:48:41.340Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:46.347 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:20:46.347 nvme0n1 : 2.01 15326.79 59.87 0.00 0.00 8346.15 7021.71 29959.31 00:20:46.347 [2024-11-26T20:48:41.340Z] =================================================================================================================== 00:20:46.347 [2024-11-26T20:48:41.340Z] Total : 15326.79 59.87 0.00 0.00 8346.15 7021.71 29959.31 00:20:46.347 { 00:20:46.347 "results": [ 00:20:46.347 { 00:20:46.347 "job": "nvme0n1", 00:20:46.347 "core_mask": "0x2", 00:20:46.347 "workload": "randread", 00:20:46.347 "status": "finished", 00:20:46.347 "queue_depth": 128, 00:20:46.347 "io_size": 4096, 00:20:46.347 "runtime": 2.005769, 00:20:46.347 "iops": 15326.789874606697, 00:20:46.347 "mibps": 59.87027294768241, 00:20:46.347 "io_failed": 0, 00:20:46.347 "io_timeout": 0, 00:20:46.347 "avg_latency_us": 8346.151953926845, 00:20:46.347 "min_latency_us": 7021.714285714285, 00:20:46.347 "max_latency_us": 29959.314285714285 00:20:46.347 } 00:20:46.347 ], 00:20:46.347 "core_count": 1 00:20:46.347 } 00:20:46.347 
20:48:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:20:46.347 20:48:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:20:46.347 20:48:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:20:46.347 20:48:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:20:46.347 | .driver_specific 00:20:46.347 | .nvme_error 00:20:46.347 | .status_code 00:20:46.347 | .command_transient_transport_error' 00:20:46.606 20:48:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 120 > 0 )) 00:20:46.606 20:48:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80875 00:20:46.606 20:48:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 80875 ']' 00:20:46.606 20:48:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 80875 00:20:46.606 20:48:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:20:46.606 20:48:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:46.606 20:48:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80875 00:20:46.606 20:48:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:46.606 killing process with pid 80875 00:20:46.606 20:48:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:46.606 20:48:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80875' 00:20:46.606 Received shutdown signal, test time was about 2.000000 seconds 00:20:46.606 00:20:46.606 Latency(us) 00:20:46.606 [2024-11-26T20:48:41.599Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:46.606 [2024-11-26T20:48:41.599Z] =================================================================================================================== 00:20:46.606 [2024-11-26T20:48:41.599Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:46.606 20:48:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 80875 00:20:46.606 20:48:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 80875 00:20:46.865 20:48:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:20:46.865 20:48:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:20:46.865 20:48:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:20:46.865 20:48:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:20:46.865 20:48:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:20:46.865 20:48:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80922 00:20:46.865 20:48:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80922 /var/tmp/bperf.sock 00:20:46.865 20:48:41 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:20:46.865 20:48:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 80922 ']' 00:20:46.865 20:48:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:20:46.865 20:48:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:46.865 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:20:46.865 20:48:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:20:46.865 20:48:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:46.865 20:48:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:46.865 I/O size of 131072 is greater than zero copy threshold (65536). 00:20:46.865 Zero copy mechanism will not be used. 00:20:46.865 [2024-11-26 20:48:41.760650] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:20:46.865 [2024-11-26 20:48:41.760737] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80922 ] 00:20:47.123 [2024-11-26 20:48:41.904504] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:47.123 [2024-11-26 20:48:41.952769] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:47.124 [2024-11-26 20:48:41.995076] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:47.124 20:48:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:47.124 20:48:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:20:47.124 20:48:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:20:47.124 20:48:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:20:47.381 20:48:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:20:47.381 20:48:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.382 20:48:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:47.382 20:48:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.382 20:48:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:47.382 20:48:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:47.948 nvme0n1 00:20:47.948 20:48:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:20:47.948 20:48:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.948 20:48:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:47.948 20:48:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.948 20:48:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:20:47.948 20:48:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:20:47.949 I/O size of 131072 is greater than zero copy threshold (65536). 00:20:47.949 Zero copy mechanism will not be used. 00:20:47.949 Running I/O for 2 seconds... 00:20:47.949 [2024-11-26 20:48:42.810569] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:47.949 [2024-11-26 20:48:42.810622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.949 [2024-11-26 20:48:42.810636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:47.949 [2024-11-26 20:48:42.814439] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:47.949 [2024-11-26 20:48:42.814475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.949 [2024-11-26 20:48:42.814487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:47.949 [2024-11-26 20:48:42.818277] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:47.949 [2024-11-26 20:48:42.818309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.949 [2024-11-26 20:48:42.818320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:47.949 [2024-11-26 20:48:42.822069] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:47.949 [2024-11-26 20:48:42.822099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.949 [2024-11-26 20:48:42.822110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:47.949 [2024-11-26 20:48:42.825855] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:47.949 [2024-11-26 20:48:42.825885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.949 
[2024-11-26 20:48:42.825896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:47.949 [2024-11-26 20:48:42.829574] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:47.949 [2024-11-26 20:48:42.829605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.949 [2024-11-26 20:48:42.829615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:47.949 [2024-11-26 20:48:42.833267] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:47.949 [2024-11-26 20:48:42.833293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.949 [2024-11-26 20:48:42.833304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:47.949 [2024-11-26 20:48:42.836982] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:47.949 [2024-11-26 20:48:42.837011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.949 [2024-11-26 20:48:42.837021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:47.949 [2024-11-26 20:48:42.840710] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:47.949 [2024-11-26 20:48:42.840739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.949 [2024-11-26 20:48:42.840749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:47.949 [2024-11-26 20:48:42.844427] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:47.949 [2024-11-26 20:48:42.844467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.949 [2024-11-26 20:48:42.844477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:47.949 [2024-11-26 20:48:42.848189] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:47.949 [2024-11-26 20:48:42.848219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.949 [2024-11-26 20:48:42.848230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:47.949 [2024-11-26 20:48:42.851881] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:47.949 [2024-11-26 20:48:42.851913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18400 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.949 [2024-11-26 20:48:42.851924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:47.949 [2024-11-26 20:48:42.855612] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:47.949 [2024-11-26 20:48:42.855643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.949 [2024-11-26 20:48:42.855653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:47.949 [2024-11-26 20:48:42.859342] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:47.949 [2024-11-26 20:48:42.859370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.949 [2024-11-26 20:48:42.859381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:47.949 [2024-11-26 20:48:42.863101] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:47.949 [2024-11-26 20:48:42.863130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.949 [2024-11-26 20:48:42.863141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:47.949 [2024-11-26 20:48:42.866823] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:47.949 [2024-11-26 20:48:42.866853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.949 [2024-11-26 20:48:42.866863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:47.949 [2024-11-26 20:48:42.870566] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:47.949 [2024-11-26 20:48:42.870596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.949 [2024-11-26 20:48:42.870607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:47.949 [2024-11-26 20:48:42.874285] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:47.949 [2024-11-26 20:48:42.874310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.949 [2024-11-26 20:48:42.874321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:47.949 [2024-11-26 20:48:42.878043] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:47.949 [2024-11-26 20:48:42.878073] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.949 [2024-11-26 20:48:42.878083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:47.949 [2024-11-26 20:48:42.881785] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:47.949 [2024-11-26 20:48:42.881814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.949 [2024-11-26 20:48:42.881824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:47.949 [2024-11-26 20:48:42.885519] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:47.949 [2024-11-26 20:48:42.885547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.949 [2024-11-26 20:48:42.885557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:47.949 [2024-11-26 20:48:42.889241] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:47.949 [2024-11-26 20:48:42.889268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.949 [2024-11-26 20:48:42.889278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:47.949 [2024-11-26 20:48:42.892913] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:47.949 [2024-11-26 20:48:42.892943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.949 [2024-11-26 20:48:42.892953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:47.949 [2024-11-26 20:48:42.896657] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:47.949 [2024-11-26 20:48:42.896690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.949 [2024-11-26 20:48:42.896701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:47.949 [2024-11-26 20:48:42.900418] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:47.949 [2024-11-26 20:48:42.900448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.949 [2024-11-26 20:48:42.900458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:47.949 [2024-11-26 20:48:42.904099] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:47.949 [2024-11-26 20:48:42.904130] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.949 [2024-11-26 20:48:42.904141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:47.950 [2024-11-26 20:48:42.907858] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:47.950 [2024-11-26 20:48:42.907889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.950 [2024-11-26 20:48:42.907900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:47.950 [2024-11-26 20:48:42.911609] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:47.950 [2024-11-26 20:48:42.911639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.950 [2024-11-26 20:48:42.911650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:47.950 [2024-11-26 20:48:42.915405] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:47.950 [2024-11-26 20:48:42.915434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.950 [2024-11-26 20:48:42.915444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:47.950 [2024-11-26 20:48:42.919222] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:47.950 [2024-11-26 20:48:42.919252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.950 [2024-11-26 20:48:42.919263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:47.950 [2024-11-26 20:48:42.922990] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:47.950 [2024-11-26 20:48:42.923020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.950 [2024-11-26 20:48:42.923031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:47.950 [2024-11-26 20:48:42.926764] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:47.950 [2024-11-26 20:48:42.926793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.950 [2024-11-26 20:48:42.926803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:47.950 [2024-11-26 20:48:42.930599] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x22339b0) 00:20:47.950 [2024-11-26 20:48:42.930628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.950 [2024-11-26 20:48:42.930639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:47.950 [2024-11-26 20:48:42.934411] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:47.950 [2024-11-26 20:48:42.934441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.950 [2024-11-26 20:48:42.934451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:47.950 [2024-11-26 20:48:42.938218] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:47.950 [2024-11-26 20:48:42.938245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.950 [2024-11-26 20:48:42.938256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:48.210 [2024-11-26 20:48:42.942018] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:48.210 [2024-11-26 20:48:42.942048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.210 [2024-11-26 20:48:42.942059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:48.210 [2024-11-26 20:48:42.945829] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:48.210 [2024-11-26 20:48:42.945859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.210 [2024-11-26 20:48:42.945870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:48.210 [2024-11-26 20:48:42.949694] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:48.210 [2024-11-26 20:48:42.949725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.210 [2024-11-26 20:48:42.949736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:48.210 [2024-11-26 20:48:42.953444] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:48.210 [2024-11-26 20:48:42.953473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.210 [2024-11-26 20:48:42.953484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:48.210 [2024-11-26 20:48:42.957298] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:48.210 [2024-11-26 20:48:42.957327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.210 [2024-11-26 20:48:42.957338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:48.210 [2024-11-26 20:48:42.961212] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:48.211 [2024-11-26 20:48:42.961242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.211 [2024-11-26 20:48:42.961253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:48.211 [2024-11-26 20:48:42.965088] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:48.211 [2024-11-26 20:48:42.965118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.211 [2024-11-26 20:48:42.965129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:48.211 [2024-11-26 20:48:42.969011] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:48.211 [2024-11-26 20:48:42.969042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.211 [2024-11-26 20:48:42.969052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:48.211 [2024-11-26 20:48:42.972976] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:48.211 [2024-11-26 20:48:42.973004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.211 [2024-11-26 20:48:42.973015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:48.211 [2024-11-26 20:48:42.976869] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:48.211 [2024-11-26 20:48:42.976898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.211 [2024-11-26 20:48:42.976908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:48.211 [2024-11-26 20:48:42.980820] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:48.211 [2024-11-26 20:48:42.980849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.211 [2024-11-26 20:48:42.980860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 
dnr:0 00:20:48.211 [2024-11-26 20:48:42.984767] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:48.211 [2024-11-26 20:48:42.984796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.211 [2024-11-26 20:48:42.984807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:48.211 [2024-11-26 20:48:42.988656] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:48.211 [2024-11-26 20:48:42.988687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.211 [2024-11-26 20:48:42.988698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:48.211 [2024-11-26 20:48:42.992624] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:48.211 [2024-11-26 20:48:42.992657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.211 [2024-11-26 20:48:42.992668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:48.211 [2024-11-26 20:48:42.996681] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:48.211 [2024-11-26 20:48:42.996712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.211 [2024-11-26 20:48:42.996723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:48.211 [2024-11-26 20:48:43.000678] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:48.211 [2024-11-26 20:48:43.000723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.211 [2024-11-26 20:48:43.000733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:48.211 [2024-11-26 20:48:43.004646] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:48.211 [2024-11-26 20:48:43.004675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.211 [2024-11-26 20:48:43.004686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:48.211 [2024-11-26 20:48:43.008585] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:48.211 [2024-11-26 20:48:43.008628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.211 [2024-11-26 20:48:43.008656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:48.211 [2024-11-26 20:48:43.012709] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:48.211 [2024-11-26 20:48:43.012741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.211 [2024-11-26 20:48:43.012768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:48.211 [2024-11-26 20:48:43.016737] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:48.211 [2024-11-26 20:48:43.016768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.211 [2024-11-26 20:48:43.016794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:48.211 [2024-11-26 20:48:43.020867] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:48.211 [2024-11-26 20:48:43.020900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.211 [2024-11-26 20:48:43.020912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:48.211 [2024-11-26 20:48:43.024992] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:48.211 [2024-11-26 20:48:43.025024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.211 [2024-11-26 20:48:43.025035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:48.211 [2024-11-26 20:48:43.029132] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:48.211 [2024-11-26 20:48:43.029173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.211 [2024-11-26 20:48:43.029185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:48.211 [2024-11-26 20:48:43.033192] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:48.211 [2024-11-26 20:48:43.033222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.211 [2024-11-26 20:48:43.033234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:48.211 [2024-11-26 20:48:43.037324] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:48.211 [2024-11-26 20:48:43.037356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.211 [2024-11-26 20:48:43.037368] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:48.211 [2024-11-26 20:48:43.041274] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:48.211 [2024-11-26 20:48:43.041304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.211 [2024-11-26 20:48:43.041316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:48.211 [2024-11-26 20:48:43.045259] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:48.211 [2024-11-26 20:48:43.045288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.211 [2024-11-26 20:48:43.045299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:48.211 [2024-11-26 20:48:43.049015] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:48.211 [2024-11-26 20:48:43.049045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.211 [2024-11-26 20:48:43.049055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:48.211 [2024-11-26 20:48:43.052812] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:48.211 [2024-11-26 20:48:43.052842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.211 [2024-11-26 20:48:43.052852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:48.211 [2024-11-26 20:48:43.056740] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:48.211 [2024-11-26 20:48:43.056771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.211 [2024-11-26 20:48:43.056782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:48.211 [2024-11-26 20:48:43.060656] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:48.211 [2024-11-26 20:48:43.060686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.211 [2024-11-26 20:48:43.060696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:48.211 [2024-11-26 20:48:43.064563] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:48.211 [2024-11-26 20:48:43.064594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:48.211 [2024-11-26 20:48:43.064604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:48.212 [2024-11-26 20:48:43.068429] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:48.212 [2024-11-26 20:48:43.068472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.212 [2024-11-26 20:48:43.068483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:48.212 [2024-11-26 20:48:43.072369] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:48.212 [2024-11-26 20:48:43.072411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.212 [2024-11-26 20:48:43.072422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:48.212 [2024-11-26 20:48:43.076280] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:48.212 [2024-11-26 20:48:43.076313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.212 [2024-11-26 20:48:43.076326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:48.212 [2024-11-26 20:48:43.080297] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:48.212 [2024-11-26 20:48:43.080330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.212 [2024-11-26 20:48:43.080342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:48.212 [2024-11-26 20:48:43.084202] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:48.212 [2024-11-26 20:48:43.084233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.212 [2024-11-26 20:48:43.084245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:48.212 [2024-11-26 20:48:43.088077] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:48.212 [2024-11-26 20:48:43.088107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.212 [2024-11-26 20:48:43.088117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:48.212 [2024-11-26 20:48:43.091979] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:48.212 [2024-11-26 20:48:43.092009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24352 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.212 [2024-11-26 20:48:43.092020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:48.212 [2024-11-26 20:48:43.095809] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:48.212 [2024-11-26 20:48:43.095840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.212 [2024-11-26 20:48:43.095852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:48.212 [2024-11-26 20:48:43.099715] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:48.212 [2024-11-26 20:48:43.099745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.212 [2024-11-26 20:48:43.099757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:48.212 [2024-11-26 20:48:43.103630] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:48.212 [2024-11-26 20:48:43.103661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.212 [2024-11-26 20:48:43.103673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:48.212 [2024-11-26 20:48:43.107527] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:48.212 [2024-11-26 20:48:43.107559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.212 [2024-11-26 20:48:43.107571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:48.212 [2024-11-26 20:48:43.111446] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:48.212 [2024-11-26 20:48:43.111477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.212 [2024-11-26 20:48:43.111489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:48.212 [2024-11-26 20:48:43.115356] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:48.212 [2024-11-26 20:48:43.115385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.212 [2024-11-26 20:48:43.115397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:48.212 [2024-11-26 20:48:43.119153] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:48.212 [2024-11-26 20:48:43.119190] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.212 [2024-11-26 20:48:43.119200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:48.212 [2024-11-26 20:48:43.122993] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:48.212 [2024-11-26 20:48:43.123022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.212 [2024-11-26 20:48:43.123032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:48.212 [2024-11-26 20:48:43.126879] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:48.212 [2024-11-26 20:48:43.126908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.212 [2024-11-26 20:48:43.126935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:48.212 [2024-11-26 20:48:43.130924] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:48.212 [2024-11-26 20:48:43.130955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.212 [2024-11-26 20:48:43.130967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:48.212 [2024-11-26 20:48:43.134885] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:48.212 [2024-11-26 20:48:43.134916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.212 [2024-11-26 20:48:43.134927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:48.212 [2024-11-26 20:48:43.139041] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:48.212 [2024-11-26 20:48:43.139073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.212 [2024-11-26 20:48:43.139086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:48.212 [2024-11-26 20:48:43.143128] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:48.212 [2024-11-26 20:48:43.143166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.212 [2024-11-26 20:48:43.143194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:48.212 [2024-11-26 20:48:43.147140] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:48.212 
[2024-11-26 20:48:43.147176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.213 [2024-11-26 20:48:43.147205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:48.213 [2024-11-26 20:48:43.151264] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:48.213 [2024-11-26 20:48:43.151292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.213 [2024-11-26 20:48:43.151326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:48.213 [2024-11-26 20:48:43.155191] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:48.213 [2024-11-26 20:48:43.155218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.213 [2024-11-26 20:48:43.155245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:48.213 [2024-11-26 20:48:43.159046] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:48.213 [2024-11-26 20:48:43.159075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.213 [2024-11-26 20:48:43.159085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:48.213 [2024-11-26 20:48:43.163079] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:48.213 [2024-11-26 20:48:43.163111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.213 [2024-11-26 20:48:43.163122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:48.213 [2024-11-26 20:48:43.166884] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:48.213 [2024-11-26 20:48:43.166914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.213 [2024-11-26 20:48:43.166925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:48.213 [2024-11-26 20:48:43.170623] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:48.213 [2024-11-26 20:48:43.170654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.213 [2024-11-26 20:48:43.170665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:48.213 [2024-11-26 20:48:43.174412] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x22339b0) 00:20:48.213 [2024-11-26 20:48:43.174442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.213 [2024-11-26 20:48:43.174453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:48.213 [2024-11-26 20:48:43.178131] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:48.213 [2024-11-26 20:48:43.178173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.213 [2024-11-26 20:48:43.178184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:48.213 [2024-11-26 20:48:43.181841] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:48.213 [2024-11-26 20:48:43.181869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.213 [2024-11-26 20:48:43.181880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:48.213 [2024-11-26 20:48:43.185707] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:48.213 [2024-11-26 20:48:43.185736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.213 [2024-11-26 20:48:43.185747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:48.213 [2024-11-26 20:48:43.189458] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:48.213 [2024-11-26 20:48:43.189488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.213 [2024-11-26 20:48:43.189499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:48.213 [2024-11-26 20:48:43.193224] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:48.213 [2024-11-26 20:48:43.193252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.213 [2024-11-26 20:48:43.193263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:48.213 [2024-11-26 20:48:43.196959] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:48.213 [2024-11-26 20:48:43.196989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.213 [2024-11-26 20:48:43.197000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:48.474 [2024-11-26 20:48:43.201040] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:48.474 [2024-11-26 20:48:43.201073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.474 [2024-11-26 20:48:43.201085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:48.474 [2024-11-26 20:48:43.205069] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:48.474 [2024-11-26 20:48:43.205099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.474 [2024-11-26 20:48:43.205127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:48.474 [2024-11-26 20:48:43.209076] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:48.474 [2024-11-26 20:48:43.209108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.474 [2024-11-26 20:48:43.209120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:48.474 [2024-11-26 20:48:43.213080] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:48.474 [2024-11-26 20:48:43.213111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.474 [2024-11-26 20:48:43.213123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:48.474 [2024-11-26 20:48:43.217157] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:48.474 [2024-11-26 20:48:43.217198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.474 [2024-11-26 20:48:43.217210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:48.474 [2024-11-26 20:48:43.221165] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:48.474 [2024-11-26 20:48:43.221194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.474 [2024-11-26 20:48:43.221206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:48.474 [2024-11-26 20:48:43.225104] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:48.474 [2024-11-26 20:48:43.225135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.474 [2024-11-26 20:48:43.225146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 
dnr:0 00:20:48.474 [2024-11-26 20:48:43.229056] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:48.474 [2024-11-26 20:48:43.229086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.474 [2024-11-26 20:48:43.229098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:48.474 [2024-11-26 20:48:43.233156] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:48.474 [2024-11-26 20:48:43.233196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.474 [2024-11-26 20:48:43.233208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:48.474 [2024-11-26 20:48:43.237101] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:48.474 [2024-11-26 20:48:43.237132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.474 [2024-11-26 20:48:43.237143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:48.474 [2024-11-26 20:48:43.241096] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:48.474 [2024-11-26 20:48:43.241127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.474 [2024-11-26 20:48:43.241139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:48.474 [2024-11-26 20:48:43.245097] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:48.474 [2024-11-26 20:48:43.245128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.474 [2024-11-26 20:48:43.245155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:48.474 [2024-11-26 20:48:43.249047] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:48.474 [2024-11-26 20:48:43.249077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.474 [2024-11-26 20:48:43.249103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:48.474 [2024-11-26 20:48:43.253002] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:48.474 [2024-11-26 20:48:43.253032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.475 [2024-11-26 20:48:43.253059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:48.475 [2024-11-26 20:48:43.256945] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:48.475 [2024-11-26 20:48:43.256973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.475 [2024-11-26 20:48:43.256984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:48.475 [2024-11-26 20:48:43.260856] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:48.475 [2024-11-26 20:48:43.260888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.475 [2024-11-26 20:48:43.260900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:48.475 [2024-11-26 20:48:43.264757] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:48.475 [2024-11-26 20:48:43.264787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.475 [2024-11-26 20:48:43.264798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:48.475 [2024-11-26 20:48:43.268736] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:48.475 [2024-11-26 20:48:43.268765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.475 [2024-11-26 20:48:43.268776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:48.475 [2024-11-26 20:48:43.272666] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:48.475 [2024-11-26 20:48:43.272695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.475 [2024-11-26 20:48:43.272706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:48.475 [2024-11-26 20:48:43.276607] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:48.475 [2024-11-26 20:48:43.276636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.475 [2024-11-26 20:48:43.276646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:48.475 [2024-11-26 20:48:43.280571] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:48.475 [2024-11-26 20:48:43.280601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.475 [2024-11-26 20:48:43.280612] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:48.475 [2024-11-26 20:48:43.284446] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:48.475 [2024-11-26 20:48:43.284476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.475 [2024-11-26 20:48:43.284488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:48.475 [2024-11-26 20:48:43.288289] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:48.475 [2024-11-26 20:48:43.288320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.475 [2024-11-26 20:48:43.288331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:48.475 [2024-11-26 20:48:43.292203] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:48.475 [2024-11-26 20:48:43.292235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.475 [2024-11-26 20:48:43.292247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:48.475 [2024-11-26 20:48:43.296095] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:48.475 [2024-11-26 20:48:43.296128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.475 [2024-11-26 20:48:43.296139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:48.475 [2024-11-26 20:48:43.299918] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:48.475 [2024-11-26 20:48:43.299950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.475 [2024-11-26 20:48:43.299962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:48.475 [2024-11-26 20:48:43.303775] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:48.475 [2024-11-26 20:48:43.303806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.475 [2024-11-26 20:48:43.303816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:48.475 [2024-11-26 20:48:43.307718] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:48.475 [2024-11-26 20:48:43.307752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:48.475 [2024-11-26 20:48:43.307763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:48.475 [2024-11-26 20:48:43.311591] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:48.475 [2024-11-26 20:48:43.311623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.475 [2024-11-26 20:48:43.311634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:48.475 [2024-11-26 20:48:43.315437] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:48.475 [2024-11-26 20:48:43.315468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.475 [2024-11-26 20:48:43.315478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:48.475 [2024-11-26 20:48:43.319322] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:48.475 [2024-11-26 20:48:43.319352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.475 [2024-11-26 20:48:43.319362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:48.475 [2024-11-26 20:48:43.323153] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:48.475 [2024-11-26 20:48:43.323197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.475 [2024-11-26 20:48:43.323209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:48.475 [2024-11-26 20:48:43.327000] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:48.475 [2024-11-26 20:48:43.327031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.475 [2024-11-26 20:48:43.327042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:48.475 [2024-11-26 20:48:43.330827] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:48.475 [2024-11-26 20:48:43.330857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.475 [2024-11-26 20:48:43.330867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:48.475 [2024-11-26 20:48:43.334651] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:48.475 [2024-11-26 20:48:43.334682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5408 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.475 [2024-11-26 20:48:43.334693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:48.475 [2024-11-26 20:48:43.338590] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:48.476 [2024-11-26 20:48:43.338620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.476 [2024-11-26 20:48:43.338631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:48.476 [2024-11-26 20:48:43.342409] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:48.476 [2024-11-26 20:48:43.342439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.476 [2024-11-26 20:48:43.342450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:48.476 [2024-11-26 20:48:43.346255] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:48.476 [2024-11-26 20:48:43.346282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.476 [2024-11-26 20:48:43.346293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:48.476 [2024-11-26 20:48:43.350072] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:48.476 [2024-11-26 20:48:43.350101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.476 [2024-11-26 20:48:43.350111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:48.476 [2024-11-26 20:48:43.353984] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:48.476 [2024-11-26 20:48:43.354014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.476 [2024-11-26 20:48:43.354025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:48.476 [2024-11-26 20:48:43.357802] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:48.476 [2024-11-26 20:48:43.357831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.476 [2024-11-26 20:48:43.357842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:48.476 [2024-11-26 20:48:43.361702] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:48.476 [2024-11-26 20:48:43.361732] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.476 [2024-11-26 20:48:43.361743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:48.476 [2024-11-26 20:48:43.365705] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:48.476 [2024-11-26 20:48:43.365736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.476 [2024-11-26 20:48:43.365748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:48.476 [2024-11-26 20:48:43.369722] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:48.476 [2024-11-26 20:48:43.369751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.476 [2024-11-26 20:48:43.369762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:48.476 [2024-11-26 20:48:43.373681] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:48.476 [2024-11-26 20:48:43.373710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.476 [2024-11-26 20:48:43.373720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:48.476 [2024-11-26 20:48:43.377516] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:48.476 [2024-11-26 20:48:43.377545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.476 [2024-11-26 20:48:43.377556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:48.476 [2024-11-26 20:48:43.381233] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:48.476 [2024-11-26 20:48:43.381260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.476 [2024-11-26 20:48:43.381270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:48.476 [2024-11-26 20:48:43.385143] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:48.476 [2024-11-26 20:48:43.385180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.476 [2024-11-26 20:48:43.385190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:48.476 [2024-11-26 20:48:43.388983] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 
00:20:48.476 [2024-11-26 20:48:43.389012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.476 [2024-11-26 20:48:43.389022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:48.476 [2024-11-26 20:48:43.392769] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:48.476 [2024-11-26 20:48:43.392798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.476 [2024-11-26 20:48:43.392808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:48.476 [2024-11-26 20:48:43.396505] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:48.476 [2024-11-26 20:48:43.396534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.476 [2024-11-26 20:48:43.396545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:48.476 [2024-11-26 20:48:43.400216] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:48.476 [2024-11-26 20:48:43.400244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.476 [2024-11-26 20:48:43.400255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:48.476 [2024-11-26 20:48:43.403944] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:48.476 [2024-11-26 20:48:43.403973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.476 [2024-11-26 20:48:43.403984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:48.476 [2024-11-26 20:48:43.407669] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:48.476 [2024-11-26 20:48:43.407699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.476 [2024-11-26 20:48:43.407710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:48.476 [2024-11-26 20:48:43.411439] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:48.476 [2024-11-26 20:48:43.411468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.476 [2024-11-26 20:48:43.411480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:48.476 [2024-11-26 20:48:43.415174] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x22339b0) 00:20:48.476 [2024-11-26 20:48:43.415201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.476 [2024-11-26 20:48:43.415211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:48.476 [2024-11-26 20:48:43.418893] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:48.476 [2024-11-26 20:48:43.418922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.476 [2024-11-26 20:48:43.418934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:48.476 [2024-11-26 20:48:43.422637] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:48.476 [2024-11-26 20:48:43.422666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.477 [2024-11-26 20:48:43.422676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:48.477 [2024-11-26 20:48:43.426381] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:48.477 [2024-11-26 20:48:43.426411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.477 [2024-11-26 20:48:43.426422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:48.477 [2024-11-26 20:48:43.430179] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:48.477 [2024-11-26 20:48:43.430207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.477 [2024-11-26 20:48:43.430218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:48.477 [2024-11-26 20:48:43.433992] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:48.477 [2024-11-26 20:48:43.434024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.477 [2024-11-26 20:48:43.434035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:48.477 [2024-11-26 20:48:43.437822] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:48.477 [2024-11-26 20:48:43.437853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.477 [2024-11-26 20:48:43.437863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:48.477 [2024-11-26 20:48:43.441644] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:48.477 [2024-11-26 20:48:43.441674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.477 [2024-11-26 20:48:43.441684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:48.477 [2024-11-26 20:48:43.445597] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:48.477 [2024-11-26 20:48:43.445626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.477 [2024-11-26 20:48:43.445637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:48.477 [2024-11-26 20:48:43.449422] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:48.477 [2024-11-26 20:48:43.449450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.477 [2024-11-26 20:48:43.449461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:48.477 [2024-11-26 20:48:43.453207] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:48.477 [2024-11-26 20:48:43.453234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.477 [2024-11-26 20:48:43.453245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:48.477 [2024-11-26 20:48:43.457013] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:48.477 [2024-11-26 20:48:43.457042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.477 [2024-11-26 20:48:43.457052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:48.477 [2024-11-26 20:48:43.460854] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:48.477 [2024-11-26 20:48:43.460883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.477 [2024-11-26 20:48:43.460894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:48.738 [2024-11-26 20:48:43.464673] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:48.738 [2024-11-26 20:48:43.464702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.738 [2024-11-26 20:48:43.464712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 
00:20:48.738 [2024-11-26 20:48:43.468421] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:48.738 [2024-11-26 20:48:43.468450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.738 [2024-11-26 20:48:43.468461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:48.738 [2024-11-26 20:48:43.472186] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:48.738 [2024-11-26 20:48:43.472214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.738 [2024-11-26 20:48:43.472225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:48.738 [2024-11-26 20:48:43.475898] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:48.738 [2024-11-26 20:48:43.475928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.738 [2024-11-26 20:48:43.475938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:48.738 [2024-11-26 20:48:43.479622] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:48.738 [2024-11-26 20:48:43.479652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.738 [2024-11-26 20:48:43.479663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:48.738 [2024-11-26 20:48:43.483422] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:48.738 [2024-11-26 20:48:43.483451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.738 [2024-11-26 20:48:43.483461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:48.738 [2024-11-26 20:48:43.487075] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:48.738 [2024-11-26 20:48:43.487102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.738 [2024-11-26 20:48:43.487128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:48.738 [2024-11-26 20:48:43.490831] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:48.738 [2024-11-26 20:48:43.490859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.738 [2024-11-26 20:48:43.490870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:48.738 [2024-11-26 20:48:43.494535] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:48.738 [2024-11-26 20:48:43.494562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.738 [2024-11-26 20:48:43.494589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:48.738 [2024-11-26 20:48:43.498287] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:48.738 [2024-11-26 20:48:43.498315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.738 [2024-11-26 20:48:43.498336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:48.738 [2024-11-26 20:48:43.501982] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:48.738 [2024-11-26 20:48:43.502010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.738 [2024-11-26 20:48:43.502035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:48.738 [2024-11-26 20:48:43.505771] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:48.738 [2024-11-26 20:48:43.505799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.738 [2024-11-26 20:48:43.505809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:48.738 [2024-11-26 20:48:43.509494] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:48.738 [2024-11-26 20:48:43.509523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.738 [2024-11-26 20:48:43.509549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:48.738 [2024-11-26 20:48:43.513222] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:48.738 [2024-11-26 20:48:43.513252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.738 [2024-11-26 20:48:43.513262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:48.738 [2024-11-26 20:48:43.516954] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:48.738 [2024-11-26 20:48:43.516983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.738 [2024-11-26 20:48:43.516994] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:48.738 [2024-11-26 20:48:43.520697] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:48.738 [2024-11-26 20:48:43.520727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.738 [2024-11-26 20:48:43.520737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:48.738 [2024-11-26 20:48:43.524441] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:48.738 [2024-11-26 20:48:43.524470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.738 [2024-11-26 20:48:43.524481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:48.738 [2024-11-26 20:48:43.528227] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:48.738 [2024-11-26 20:48:43.528254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.738 [2024-11-26 20:48:43.528264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:48.738 [2024-11-26 20:48:43.531878] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:48.739 [2024-11-26 20:48:43.531908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.739 [2024-11-26 20:48:43.531918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:48.739 [2024-11-26 20:48:43.535694] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:48.739 [2024-11-26 20:48:43.535723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.739 [2024-11-26 20:48:43.535734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:48.739 [2024-11-26 20:48:43.539488] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:48.739 [2024-11-26 20:48:43.539517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.739 [2024-11-26 20:48:43.539527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:48.739 [2024-11-26 20:48:43.543316] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:48.739 [2024-11-26 20:48:43.543343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.739 [2024-11-26 
20:48:43.543354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:48.739 [2024-11-26 20:48:43.547081] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:48.739 [2024-11-26 20:48:43.547110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.739 [2024-11-26 20:48:43.547136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:48.739 [2024-11-26 20:48:43.550992] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:48.739 [2024-11-26 20:48:43.551021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.739 [2024-11-26 20:48:43.551031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:48.739 [2024-11-26 20:48:43.554980] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:48.739 [2024-11-26 20:48:43.555008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.739 [2024-11-26 20:48:43.555019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:48.739 [2024-11-26 20:48:43.558829] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:48.739 [2024-11-26 20:48:43.558858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.739 [2024-11-26 20:48:43.558869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:48.739 [2024-11-26 20:48:43.562637] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:48.739 [2024-11-26 20:48:43.562666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.739 [2024-11-26 20:48:43.562676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:48.739 [2024-11-26 20:48:43.566492] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:48.739 [2024-11-26 20:48:43.566521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.739 [2024-11-26 20:48:43.566531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:48.739 [2024-11-26 20:48:43.570296] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:48.739 [2024-11-26 20:48:43.570325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:20:48.739 [2024-11-26 20:48:43.570335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:48.739 [2024-11-26 20:48:43.574098] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:48.739 [2024-11-26 20:48:43.574127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.739 [2024-11-26 20:48:43.574138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:48.739 [2024-11-26 20:48:43.577827] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:48.739 [2024-11-26 20:48:43.577856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.739 [2024-11-26 20:48:43.577867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:48.739 [2024-11-26 20:48:43.581528] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:48.739 [2024-11-26 20:48:43.581557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.739 [2024-11-26 20:48:43.581568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:48.739 [2024-11-26 20:48:43.585297] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:48.739 [2024-11-26 20:48:43.585334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.739 [2024-11-26 20:48:43.585345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:48.739 [2024-11-26 20:48:43.589038] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:48.739 [2024-11-26 20:48:43.589067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.739 [2024-11-26 20:48:43.589077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:48.739 [2024-11-26 20:48:43.592770] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:48.739 [2024-11-26 20:48:43.592799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.739 [2024-11-26 20:48:43.592810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:48.739 [2024-11-26 20:48:43.596578] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:48.739 [2024-11-26 20:48:43.596606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:12 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.739 [2024-11-26 20:48:43.596617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:48.739 [2024-11-26 20:48:43.600317] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:48.739 [2024-11-26 20:48:43.600346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.739 [2024-11-26 20:48:43.600356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:48.739 [2024-11-26 20:48:43.604079] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:48.739 [2024-11-26 20:48:43.604109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.739 [2024-11-26 20:48:43.604120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:48.739 [2024-11-26 20:48:43.607860] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:48.739 [2024-11-26 20:48:43.607890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.739 [2024-11-26 20:48:43.607900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:48.739 [2024-11-26 20:48:43.611560] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:48.739 [2024-11-26 20:48:43.611589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.739 [2024-11-26 20:48:43.611600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:48.739 [2024-11-26 20:48:43.615334] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:48.739 [2024-11-26 20:48:43.615361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.739 [2024-11-26 20:48:43.615372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:48.739 [2024-11-26 20:48:43.618992] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:48.739 [2024-11-26 20:48:43.619019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.739 [2024-11-26 20:48:43.619030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:48.739 [2024-11-26 20:48:43.622705] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:48.739 [2024-11-26 20:48:43.622734] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.739 [2024-11-26 20:48:43.622744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:48.739 [2024-11-26 20:48:43.626428] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:48.739 [2024-11-26 20:48:43.626457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.739 [2024-11-26 20:48:43.626468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:48.739 [2024-11-26 20:48:43.630135] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:48.739 [2024-11-26 20:48:43.630178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.739 [2024-11-26 20:48:43.630188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:48.739 [2024-11-26 20:48:43.633834] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:48.740 [2024-11-26 20:48:43.633863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.740 [2024-11-26 20:48:43.633874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:48.740 [2024-11-26 20:48:43.637545] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:48.740 [2024-11-26 20:48:43.637573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.740 [2024-11-26 20:48:43.637584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:48.740 [2024-11-26 20:48:43.641282] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:48.740 [2024-11-26 20:48:43.641308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.740 [2024-11-26 20:48:43.641318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:48.740 [2024-11-26 20:48:43.645032] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:48.740 [2024-11-26 20:48:43.645061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.740 [2024-11-26 20:48:43.645071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:48.740 [2024-11-26 20:48:43.648763] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:48.740 
[2024-11-26 20:48:43.648791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.740 [2024-11-26 20:48:43.648801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:48.740 [2024-11-26 20:48:43.652525] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:48.740 [2024-11-26 20:48:43.652553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.740 [2024-11-26 20:48:43.652563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:48.740 [2024-11-26 20:48:43.656270] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:48.740 [2024-11-26 20:48:43.656300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.740 [2024-11-26 20:48:43.656311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:48.740 [2024-11-26 20:48:43.659963] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:48.740 [2024-11-26 20:48:43.659993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.740 [2024-11-26 20:48:43.660004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:48.740 [2024-11-26 20:48:43.663687] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:48.740 [2024-11-26 20:48:43.663717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.740 [2024-11-26 20:48:43.663728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:48.740 [2024-11-26 20:48:43.667422] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:48.740 [2024-11-26 20:48:43.667457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.740 [2024-11-26 20:48:43.667468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:48.740 [2024-11-26 20:48:43.671088] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:48.740 [2024-11-26 20:48:43.671118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.740 [2024-11-26 20:48:43.671128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:48.740 [2024-11-26 20:48:43.674819] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x22339b0) 00:20:48.740 [2024-11-26 20:48:43.674848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.740 [2024-11-26 20:48:43.674858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:48.740 [2024-11-26 20:48:43.678512] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:48.740 [2024-11-26 20:48:43.678541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.740 [2024-11-26 20:48:43.678551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:48.740 [2024-11-26 20:48:43.682180] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:48.740 [2024-11-26 20:48:43.682206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.740 [2024-11-26 20:48:43.682217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:48.740 [2024-11-26 20:48:43.685888] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:48.740 [2024-11-26 20:48:43.685916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.740 [2024-11-26 20:48:43.685926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:48.740 [2024-11-26 20:48:43.689642] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:48.740 [2024-11-26 20:48:43.689672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.740 [2024-11-26 20:48:43.689682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:48.740 [2024-11-26 20:48:43.693415] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:48.740 [2024-11-26 20:48:43.693444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.740 [2024-11-26 20:48:43.693470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:48.740 [2024-11-26 20:48:43.697144] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:48.740 [2024-11-26 20:48:43.697181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.740 [2024-11-26 20:48:43.697191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:48.740 [2024-11-26 20:48:43.700899] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:48.740 [2024-11-26 20:48:43.700928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.740 [2024-11-26 20:48:43.700938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:48.740 [2024-11-26 20:48:43.704620] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:48.740 [2024-11-26 20:48:43.704649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.740 [2024-11-26 20:48:43.704659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:48.740 [2024-11-26 20:48:43.708368] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:48.740 [2024-11-26 20:48:43.708397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.740 [2024-11-26 20:48:43.708408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:48.740 [2024-11-26 20:48:43.712111] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:48.740 [2024-11-26 20:48:43.712141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.740 [2024-11-26 20:48:43.712151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:48.740 [2024-11-26 20:48:43.715866] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:48.740 [2024-11-26 20:48:43.715896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.740 [2024-11-26 20:48:43.715907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:48.740 [2024-11-26 20:48:43.719607] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:48.740 [2024-11-26 20:48:43.719636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.740 [2024-11-26 20:48:43.719646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:48.740 [2024-11-26 20:48:43.723309] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:48.740 [2024-11-26 20:48:43.723335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.740 [2024-11-26 20:48:43.723346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 
m:0 dnr:0 00:20:48.740 [2024-11-26 20:48:43.727080] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:48.740 [2024-11-26 20:48:43.727109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.740 [2024-11-26 20:48:43.727119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:49.002 [2024-11-26 20:48:43.730879] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.002 [2024-11-26 20:48:43.730908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.002 [2024-11-26 20:48:43.730918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:49.002 [2024-11-26 20:48:43.734705] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.002 [2024-11-26 20:48:43.734734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.002 [2024-11-26 20:48:43.734745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:49.002 [2024-11-26 20:48:43.738479] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.002 [2024-11-26 20:48:43.738508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.002 [2024-11-26 20:48:43.738518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:49.002 [2024-11-26 20:48:43.742249] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.002 [2024-11-26 20:48:43.742277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.002 [2024-11-26 20:48:43.742287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:49.002 [2024-11-26 20:48:43.746064] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.002 [2024-11-26 20:48:43.746092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.002 [2024-11-26 20:48:43.746118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:49.002 [2024-11-26 20:48:43.749840] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.002 [2024-11-26 20:48:43.749869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.002 [2024-11-26 20:48:43.749880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:49.002 [2024-11-26 20:48:43.753589] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.002 [2024-11-26 20:48:43.753617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.002 [2024-11-26 20:48:43.753628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:49.002 [2024-11-26 20:48:43.757348] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.002 [2024-11-26 20:48:43.757375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.002 [2024-11-26 20:48:43.757401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:49.002 [2024-11-26 20:48:43.761092] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.002 [2024-11-26 20:48:43.761120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.002 [2024-11-26 20:48:43.761146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:49.002 [2024-11-26 20:48:43.764899] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.002 [2024-11-26 20:48:43.764927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.002 [2024-11-26 20:48:43.764937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:49.003 [2024-11-26 20:48:43.768653] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.003 [2024-11-26 20:48:43.768682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.003 [2024-11-26 20:48:43.768692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:49.003 [2024-11-26 20:48:43.772428] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.003 [2024-11-26 20:48:43.772468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.003 [2024-11-26 20:48:43.772479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:49.003 [2024-11-26 20:48:43.776184] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.003 [2024-11-26 20:48:43.776211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.003 [2024-11-26 20:48:43.776221] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:49.003 [2024-11-26 20:48:43.779880] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.003 [2024-11-26 20:48:43.779910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.003 [2024-11-26 20:48:43.779920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:49.003 [2024-11-26 20:48:43.783605] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.003 [2024-11-26 20:48:43.783635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.003 [2024-11-26 20:48:43.783645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:49.003 [2024-11-26 20:48:43.787312] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.003 [2024-11-26 20:48:43.787353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.003 [2024-11-26 20:48:43.787364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:49.003 [2024-11-26 20:48:43.790994] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.003 [2024-11-26 20:48:43.791022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.003 [2024-11-26 20:48:43.791032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:49.003 [2024-11-26 20:48:43.794684] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.003 [2024-11-26 20:48:43.794713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.003 [2024-11-26 20:48:43.794723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:49.003 [2024-11-26 20:48:43.798438] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.003 [2024-11-26 20:48:43.798468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.003 [2024-11-26 20:48:43.798479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:49.003 [2024-11-26 20:48:43.802243] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.003 [2024-11-26 20:48:43.802271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:49.003 [2024-11-26 20:48:43.802297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:49.003 [2024-11-26 20:48:43.806062] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.003 [2024-11-26 20:48:43.806091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.003 [2024-11-26 20:48:43.806102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:49.003 8075.00 IOPS, 1009.38 MiB/s [2024-11-26T20:48:43.996Z] [2024-11-26 20:48:43.811182] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.003 [2024-11-26 20:48:43.811209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.003 [2024-11-26 20:48:43.811220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:49.003 [2024-11-26 20:48:43.814883] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.003 [2024-11-26 20:48:43.814913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.003 [2024-11-26 20:48:43.814924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:49.003 [2024-11-26 20:48:43.818706] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.003 [2024-11-26 20:48:43.818734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.003 [2024-11-26 20:48:43.818745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:49.003 [2024-11-26 20:48:43.822487] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.003 [2024-11-26 20:48:43.822516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.003 [2024-11-26 20:48:43.822527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:49.003 [2024-11-26 20:48:43.826210] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.003 [2024-11-26 20:48:43.826238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.003 [2024-11-26 20:48:43.826248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:49.003 [2024-11-26 20:48:43.829939] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.003 [2024-11-26 20:48:43.829968] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.003 [2024-11-26 20:48:43.829978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:49.003 [2024-11-26 20:48:43.833673] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.003 [2024-11-26 20:48:43.833701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.003 [2024-11-26 20:48:43.833712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:49.003 [2024-11-26 20:48:43.837383] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.003 [2024-11-26 20:48:43.837411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.003 [2024-11-26 20:48:43.837422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:49.003 [2024-11-26 20:48:43.841135] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.003 [2024-11-26 20:48:43.841176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.003 [2024-11-26 20:48:43.841187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:49.003 [2024-11-26 20:48:43.844855] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.003 [2024-11-26 20:48:43.844885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.003 [2024-11-26 20:48:43.844895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:49.003 [2024-11-26 20:48:43.848616] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.003 [2024-11-26 20:48:43.848645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.003 [2024-11-26 20:48:43.848656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:49.003 [2024-11-26 20:48:43.852325] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.004 [2024-11-26 20:48:43.852354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.004 [2024-11-26 20:48:43.852364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:49.004 [2024-11-26 20:48:43.856127] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 
00:20:49.004 [2024-11-26 20:48:43.856166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.004 [2024-11-26 20:48:43.856177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:49.004 [2024-11-26 20:48:43.859832] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.004 [2024-11-26 20:48:43.859861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.004 [2024-11-26 20:48:43.859872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:49.004 [2024-11-26 20:48:43.863565] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.004 [2024-11-26 20:48:43.863594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.004 [2024-11-26 20:48:43.863605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:49.004 [2024-11-26 20:48:43.867313] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.004 [2024-11-26 20:48:43.867341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.004 [2024-11-26 20:48:43.867352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:49.004 [2024-11-26 20:48:43.871099] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.004 [2024-11-26 20:48:43.871129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.004 [2024-11-26 20:48:43.871140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:49.004 [2024-11-26 20:48:43.874878] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.004 [2024-11-26 20:48:43.874909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.004 [2024-11-26 20:48:43.874920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:49.004 [2024-11-26 20:48:43.878734] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.004 [2024-11-26 20:48:43.878764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.004 [2024-11-26 20:48:43.878790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:49.004 [2024-11-26 20:48:43.882550] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.004 [2024-11-26 20:48:43.882581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.004 [2024-11-26 20:48:43.882591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:49.004 [2024-11-26 20:48:43.886399] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.004 [2024-11-26 20:48:43.886429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.004 [2024-11-26 20:48:43.886440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:49.004 [2024-11-26 20:48:43.890128] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.004 [2024-11-26 20:48:43.890182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.004 [2024-11-26 20:48:43.890193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:49.004 [2024-11-26 20:48:43.894004] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.004 [2024-11-26 20:48:43.894032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.004 [2024-11-26 20:48:43.894042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:49.004 [2024-11-26 20:48:43.897747] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.004 [2024-11-26 20:48:43.897775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.004 [2024-11-26 20:48:43.897801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:49.004 [2024-11-26 20:48:43.901481] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.004 [2024-11-26 20:48:43.901509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.004 [2024-11-26 20:48:43.901534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:49.004 [2024-11-26 20:48:43.905247] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.004 [2024-11-26 20:48:43.905274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.004 [2024-11-26 20:48:43.905284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:49.004 [2024-11-26 20:48:43.908957] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.004 [2024-11-26 20:48:43.908985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.004 [2024-11-26 20:48:43.908995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:49.004 [2024-11-26 20:48:43.912713] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.004 [2024-11-26 20:48:43.912741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.004 [2024-11-26 20:48:43.912751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:49.004 [2024-11-26 20:48:43.916492] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.004 [2024-11-26 20:48:43.916521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.004 [2024-11-26 20:48:43.916547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:49.004 [2024-11-26 20:48:43.920296] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.004 [2024-11-26 20:48:43.920325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.004 [2024-11-26 20:48:43.920335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:49.004 [2024-11-26 20:48:43.924031] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.004 [2024-11-26 20:48:43.924060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.004 [2024-11-26 20:48:43.924071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:49.004 [2024-11-26 20:48:43.927778] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.004 [2024-11-26 20:48:43.927808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.004 [2024-11-26 20:48:43.927818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:49.004 [2024-11-26 20:48:43.931516] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.004 [2024-11-26 20:48:43.931545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.004 [2024-11-26 20:48:43.931556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 
00:20:49.004 [2024-11-26 20:48:43.935210] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.005 [2024-11-26 20:48:43.935237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.005 [2024-11-26 20:48:43.935247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:49.005 [2024-11-26 20:48:43.938914] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.005 [2024-11-26 20:48:43.938942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.005 [2024-11-26 20:48:43.938952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:49.005 [2024-11-26 20:48:43.942602] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.005 [2024-11-26 20:48:43.942629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.005 [2024-11-26 20:48:43.942640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:49.005 [2024-11-26 20:48:43.946408] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.005 [2024-11-26 20:48:43.946436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.005 [2024-11-26 20:48:43.946446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:49.005 [2024-11-26 20:48:43.950152] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.005 [2024-11-26 20:48:43.950189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.005 [2024-11-26 20:48:43.950200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:49.005 [2024-11-26 20:48:43.953877] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.005 [2024-11-26 20:48:43.953905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.005 [2024-11-26 20:48:43.953915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:49.005 [2024-11-26 20:48:43.957617] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.005 [2024-11-26 20:48:43.957645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.005 [2024-11-26 20:48:43.957655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:49.005 [2024-11-26 20:48:43.961400] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.005 [2024-11-26 20:48:43.961428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.005 [2024-11-26 20:48:43.961438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:49.005 [2024-11-26 20:48:43.965121] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.005 [2024-11-26 20:48:43.965150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.005 [2024-11-26 20:48:43.965172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:49.005 [2024-11-26 20:48:43.968964] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.005 [2024-11-26 20:48:43.968992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.005 [2024-11-26 20:48:43.969002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:49.005 [2024-11-26 20:48:43.972730] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.005 [2024-11-26 20:48:43.972758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.005 [2024-11-26 20:48:43.972783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:49.005 [2024-11-26 20:48:43.976442] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.005 [2024-11-26 20:48:43.976471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.005 [2024-11-26 20:48:43.976481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:49.005 [2024-11-26 20:48:43.980232] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.005 [2024-11-26 20:48:43.980260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.005 [2024-11-26 20:48:43.980270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:49.005 [2024-11-26 20:48:43.983918] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.005 [2024-11-26 20:48:43.983947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.005 [2024-11-26 20:48:43.983958] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:49.005 [2024-11-26 20:48:43.987612] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.005 [2024-11-26 20:48:43.987641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.005 [2024-11-26 20:48:43.987651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:49.005 [2024-11-26 20:48:43.991356] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.005 [2024-11-26 20:48:43.991385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.005 [2024-11-26 20:48:43.991397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:49.266 [2024-11-26 20:48:43.995127] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.266 [2024-11-26 20:48:43.995181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.266 [2024-11-26 20:48:43.995192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:49.266 [2024-11-26 20:48:43.998846] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.266 [2024-11-26 20:48:43.998874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.266 [2024-11-26 20:48:43.998885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:49.266 [2024-11-26 20:48:44.002652] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.266 [2024-11-26 20:48:44.002680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.266 [2024-11-26 20:48:44.002691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:49.266 [2024-11-26 20:48:44.006537] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.266 [2024-11-26 20:48:44.006568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.266 [2024-11-26 20:48:44.006579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:49.266 [2024-11-26 20:48:44.010312] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.266 [2024-11-26 20:48:44.010343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.266 
[2024-11-26 20:48:44.010354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:49.266 [2024-11-26 20:48:44.014141] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.266 [2024-11-26 20:48:44.014195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.266 [2024-11-26 20:48:44.014206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:49.266 [2024-11-26 20:48:44.018057] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.266 [2024-11-26 20:48:44.018087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.266 [2024-11-26 20:48:44.018114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:49.266 [2024-11-26 20:48:44.021845] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.266 [2024-11-26 20:48:44.021876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.266 [2024-11-26 20:48:44.021902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:49.266 [2024-11-26 20:48:44.025646] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.266 [2024-11-26 20:48:44.025675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.266 [2024-11-26 20:48:44.025702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:49.266 [2024-11-26 20:48:44.029399] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.266 [2024-11-26 20:48:44.029427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.266 [2024-11-26 20:48:44.029454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:49.266 [2024-11-26 20:48:44.033137] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.266 [2024-11-26 20:48:44.033174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.267 [2024-11-26 20:48:44.033201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:49.267 [2024-11-26 20:48:44.036954] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.267 [2024-11-26 20:48:44.036983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16960 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.267 [2024-11-26 20:48:44.036994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:49.267 [2024-11-26 20:48:44.040743] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.267 [2024-11-26 20:48:44.040772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.267 [2024-11-26 20:48:44.040798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:49.267 [2024-11-26 20:48:44.044511] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.267 [2024-11-26 20:48:44.044541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.267 [2024-11-26 20:48:44.044552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:49.267 [2024-11-26 20:48:44.048257] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.267 [2024-11-26 20:48:44.048286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.267 [2024-11-26 20:48:44.048296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:49.267 [2024-11-26 20:48:44.051953] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.267 [2024-11-26 20:48:44.051983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.267 [2024-11-26 20:48:44.051993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:49.267 [2024-11-26 20:48:44.055670] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.267 [2024-11-26 20:48:44.055700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.267 [2024-11-26 20:48:44.055710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:49.267 [2024-11-26 20:48:44.059443] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.267 [2024-11-26 20:48:44.059471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.267 [2024-11-26 20:48:44.059481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:49.267 [2024-11-26 20:48:44.063176] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.267 [2024-11-26 20:48:44.063218] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.267 [2024-11-26 20:48:44.063229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:49.267 [2024-11-26 20:48:44.066852] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.267 [2024-11-26 20:48:44.066880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.267 [2024-11-26 20:48:44.066906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:49.267 [2024-11-26 20:48:44.070581] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.267 [2024-11-26 20:48:44.070610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.267 [2024-11-26 20:48:44.070635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:49.267 [2024-11-26 20:48:44.074283] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.267 [2024-11-26 20:48:44.074310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.267 [2024-11-26 20:48:44.074336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:49.267 [2024-11-26 20:48:44.077979] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.267 [2024-11-26 20:48:44.078016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.267 [2024-11-26 20:48:44.078042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:49.267 [2024-11-26 20:48:44.081685] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.267 [2024-11-26 20:48:44.081713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.267 [2024-11-26 20:48:44.081724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:49.267 [2024-11-26 20:48:44.085446] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.267 [2024-11-26 20:48:44.085475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.267 [2024-11-26 20:48:44.085500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:49.267 [2024-11-26 20:48:44.089214] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.267 [2024-11-26 
20:48:44.089241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.267 [2024-11-26 20:48:44.089252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:49.267 [2024-11-26 20:48:44.092956] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.267 [2024-11-26 20:48:44.092984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.267 [2024-11-26 20:48:44.093010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:49.267 [2024-11-26 20:48:44.096717] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.267 [2024-11-26 20:48:44.096745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.267 [2024-11-26 20:48:44.096772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:49.267 [2024-11-26 20:48:44.100470] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.267 [2024-11-26 20:48:44.100499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.267 [2024-11-26 20:48:44.100509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:49.267 [2024-11-26 20:48:44.104166] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.267 [2024-11-26 20:48:44.104191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.267 [2024-11-26 20:48:44.104201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:49.267 [2024-11-26 20:48:44.107860] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.267 [2024-11-26 20:48:44.107890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.267 [2024-11-26 20:48:44.107900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:49.267 [2024-11-26 20:48:44.111638] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.267 [2024-11-26 20:48:44.111668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.267 [2024-11-26 20:48:44.111678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:49.267 [2024-11-26 20:48:44.115349] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x22339b0) 00:20:49.267 [2024-11-26 20:48:44.115378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.267 [2024-11-26 20:48:44.115389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:49.267 [2024-11-26 20:48:44.119069] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.267 [2024-11-26 20:48:44.119096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.267 [2024-11-26 20:48:44.119122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:49.267 [2024-11-26 20:48:44.122803] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.267 [2024-11-26 20:48:44.122832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.267 [2024-11-26 20:48:44.122858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:49.267 [2024-11-26 20:48:44.126583] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.267 [2024-11-26 20:48:44.126612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.267 [2024-11-26 20:48:44.126638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:49.267 [2024-11-26 20:48:44.130359] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.267 [2024-11-26 20:48:44.130388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.267 [2024-11-26 20:48:44.130398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:49.267 [2024-11-26 20:48:44.134119] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.267 [2024-11-26 20:48:44.134149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.267 [2024-11-26 20:48:44.134170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:49.268 [2024-11-26 20:48:44.137898] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.268 [2024-11-26 20:48:44.137927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.268 [2024-11-26 20:48:44.137938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:49.268 [2024-11-26 20:48:44.141685] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.268 [2024-11-26 20:48:44.141714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.268 [2024-11-26 20:48:44.141724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:49.268 [2024-11-26 20:48:44.145425] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.268 [2024-11-26 20:48:44.145465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.268 [2024-11-26 20:48:44.145476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:49.268 [2024-11-26 20:48:44.149189] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.268 [2024-11-26 20:48:44.149216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.268 [2024-11-26 20:48:44.149227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:49.268 [2024-11-26 20:48:44.152914] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.268 [2024-11-26 20:48:44.152943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.268 [2024-11-26 20:48:44.152954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:49.268 [2024-11-26 20:48:44.156692] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.268 [2024-11-26 20:48:44.156721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.268 [2024-11-26 20:48:44.156733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:49.268 [2024-11-26 20:48:44.160421] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.268 [2024-11-26 20:48:44.160450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.268 [2024-11-26 20:48:44.160461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:49.268 [2024-11-26 20:48:44.164268] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.268 [2024-11-26 20:48:44.164297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.268 [2024-11-26 20:48:44.164308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 
dnr:0 00:20:49.268 [2024-11-26 20:48:44.168067] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.268 [2024-11-26 20:48:44.168099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.268 [2024-11-26 20:48:44.168110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:49.268 [2024-11-26 20:48:44.171804] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.268 [2024-11-26 20:48:44.171834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.268 [2024-11-26 20:48:44.171844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:49.268 [2024-11-26 20:48:44.175631] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.268 [2024-11-26 20:48:44.175660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.268 [2024-11-26 20:48:44.175671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:49.268 [2024-11-26 20:48:44.179605] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.268 [2024-11-26 20:48:44.179637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.268 [2024-11-26 20:48:44.179649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:49.268 [2024-11-26 20:48:44.183477] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.268 [2024-11-26 20:48:44.183509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.268 [2024-11-26 20:48:44.183521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:49.268 [2024-11-26 20:48:44.187428] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.268 [2024-11-26 20:48:44.187460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.268 [2024-11-26 20:48:44.187472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:49.268 [2024-11-26 20:48:44.191363] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.268 [2024-11-26 20:48:44.191395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.268 [2024-11-26 20:48:44.191407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:49.268 [2024-11-26 20:48:44.195201] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.268 [2024-11-26 20:48:44.195230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.268 [2024-11-26 20:48:44.195240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:49.268 [2024-11-26 20:48:44.199027] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.268 [2024-11-26 20:48:44.199057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.268 [2024-11-26 20:48:44.199068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:49.268 [2024-11-26 20:48:44.202905] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.268 [2024-11-26 20:48:44.202936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.268 [2024-11-26 20:48:44.202947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:49.268 [2024-11-26 20:48:44.206757] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.268 [2024-11-26 20:48:44.206787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.268 [2024-11-26 20:48:44.206798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:49.268 [2024-11-26 20:48:44.210591] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.268 [2024-11-26 20:48:44.210621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.268 [2024-11-26 20:48:44.210631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:49.268 [2024-11-26 20:48:44.214293] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.268 [2024-11-26 20:48:44.214321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.268 [2024-11-26 20:48:44.214332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:49.268 [2024-11-26 20:48:44.217983] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.268 [2024-11-26 20:48:44.218011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.268 [2024-11-26 20:48:44.218022] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:49.268 [2024-11-26 20:48:44.221734] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.268 [2024-11-26 20:48:44.221763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.268 [2024-11-26 20:48:44.221773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:49.268 [2024-11-26 20:48:44.225562] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.268 [2024-11-26 20:48:44.225590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.268 [2024-11-26 20:48:44.225601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:49.268 [2024-11-26 20:48:44.229401] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.268 [2024-11-26 20:48:44.229430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.268 [2024-11-26 20:48:44.229440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:49.268 [2024-11-26 20:48:44.233309] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.268 [2024-11-26 20:48:44.233338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.268 [2024-11-26 20:48:44.233348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:49.268 [2024-11-26 20:48:44.237017] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.268 [2024-11-26 20:48:44.237046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.268 [2024-11-26 20:48:44.237072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:49.269 [2024-11-26 20:48:44.240735] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.269 [2024-11-26 20:48:44.240763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.269 [2024-11-26 20:48:44.240790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:49.269 [2024-11-26 20:48:44.244510] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.269 [2024-11-26 20:48:44.244539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.269 
[2024-11-26 20:48:44.244550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:49.269 [2024-11-26 20:48:44.248240] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.269 [2024-11-26 20:48:44.248268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.269 [2024-11-26 20:48:44.248279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:49.269 [2024-11-26 20:48:44.251953] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.269 [2024-11-26 20:48:44.251982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.269 [2024-11-26 20:48:44.252008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:49.529 [2024-11-26 20:48:44.255752] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.529 [2024-11-26 20:48:44.255781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.529 [2024-11-26 20:48:44.255791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:49.529 [2024-11-26 20:48:44.259496] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.529 [2024-11-26 20:48:44.259525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.529 [2024-11-26 20:48:44.259552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:49.529 [2024-11-26 20:48:44.263281] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.529 [2024-11-26 20:48:44.263317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.529 [2024-11-26 20:48:44.263328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:49.529 [2024-11-26 20:48:44.267081] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.529 [2024-11-26 20:48:44.267110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.529 [2024-11-26 20:48:44.267120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:49.529 [2024-11-26 20:48:44.270872] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.529 [2024-11-26 20:48:44.270900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24672 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.529 [2024-11-26 20:48:44.270910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:49.529 [2024-11-26 20:48:44.274633] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.529 [2024-11-26 20:48:44.274663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.529 [2024-11-26 20:48:44.274673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:49.529 [2024-11-26 20:48:44.278393] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.529 [2024-11-26 20:48:44.278424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.529 [2024-11-26 20:48:44.278434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:49.529 [2024-11-26 20:48:44.282091] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.529 [2024-11-26 20:48:44.282119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.529 [2024-11-26 20:48:44.282145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:49.529 [2024-11-26 20:48:44.285806] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.529 [2024-11-26 20:48:44.285835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.529 [2024-11-26 20:48:44.285861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:49.529 [2024-11-26 20:48:44.289553] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.529 [2024-11-26 20:48:44.289582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.529 [2024-11-26 20:48:44.289592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:49.529 [2024-11-26 20:48:44.293239] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.529 [2024-11-26 20:48:44.293266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.529 [2024-11-26 20:48:44.293292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:49.529 [2024-11-26 20:48:44.296969] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.529 [2024-11-26 20:48:44.296998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:6 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.529 [2024-11-26 20:48:44.297024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:49.529 [2024-11-26 20:48:44.300732] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.529 [2024-11-26 20:48:44.300759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.529 [2024-11-26 20:48:44.300769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:49.529 [2024-11-26 20:48:44.304591] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.529 [2024-11-26 20:48:44.304624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.529 [2024-11-26 20:48:44.304637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:49.529 [2024-11-26 20:48:44.308493] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.529 [2024-11-26 20:48:44.308522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.529 [2024-11-26 20:48:44.308545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:49.529 [2024-11-26 20:48:44.312295] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.529 [2024-11-26 20:48:44.312324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.529 [2024-11-26 20:48:44.312335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:49.529 [2024-11-26 20:48:44.316038] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.529 [2024-11-26 20:48:44.316067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.529 [2024-11-26 20:48:44.316078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:49.529 [2024-11-26 20:48:44.319751] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.529 [2024-11-26 20:48:44.319781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.529 [2024-11-26 20:48:44.319791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:49.529 [2024-11-26 20:48:44.323449] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.529 [2024-11-26 20:48:44.323479] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.529 [2024-11-26 20:48:44.323489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:49.529 [2024-11-26 20:48:44.327143] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.529 [2024-11-26 20:48:44.327181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.529 [2024-11-26 20:48:44.327192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:49.529 [2024-11-26 20:48:44.330857] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.529 [2024-11-26 20:48:44.330886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.529 [2024-11-26 20:48:44.330897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:49.529 [2024-11-26 20:48:44.334617] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.529 [2024-11-26 20:48:44.334646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.529 [2024-11-26 20:48:44.334657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:49.529 [2024-11-26 20:48:44.338351] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.529 [2024-11-26 20:48:44.338384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.529 [2024-11-26 20:48:44.338395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:49.529 [2024-11-26 20:48:44.342137] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.529 [2024-11-26 20:48:44.342191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.530 [2024-11-26 20:48:44.342202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:49.530 [2024-11-26 20:48:44.345862] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.530 [2024-11-26 20:48:44.345891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.530 [2024-11-26 20:48:44.345902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:49.530 [2024-11-26 20:48:44.349592] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x22339b0) 00:20:49.530 [2024-11-26 20:48:44.349621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.530 [2024-11-26 20:48:44.349632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:49.530 [2024-11-26 20:48:44.353305] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.530 [2024-11-26 20:48:44.353333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.530 [2024-11-26 20:48:44.353359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:49.530 [2024-11-26 20:48:44.357028] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.530 [2024-11-26 20:48:44.357058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.530 [2024-11-26 20:48:44.357084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:49.530 [2024-11-26 20:48:44.360819] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.530 [2024-11-26 20:48:44.360848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.530 [2024-11-26 20:48:44.360874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:49.530 [2024-11-26 20:48:44.364603] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.530 [2024-11-26 20:48:44.364632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.530 [2024-11-26 20:48:44.364642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:49.530 [2024-11-26 20:48:44.368287] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.530 [2024-11-26 20:48:44.368316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.530 [2024-11-26 20:48:44.368327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:49.530 [2024-11-26 20:48:44.372075] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.530 [2024-11-26 20:48:44.372104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.530 [2024-11-26 20:48:44.372115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:49.530 [2024-11-26 20:48:44.375845] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.530 [2024-11-26 20:48:44.375876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.530 [2024-11-26 20:48:44.375888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:49.530 [2024-11-26 20:48:44.379684] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.530 [2024-11-26 20:48:44.379714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.530 [2024-11-26 20:48:44.379725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:49.530 [2024-11-26 20:48:44.383514] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.530 [2024-11-26 20:48:44.383544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.530 [2024-11-26 20:48:44.383554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:49.530 [2024-11-26 20:48:44.387289] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.530 [2024-11-26 20:48:44.387328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.530 [2024-11-26 20:48:44.387338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:49.530 [2024-11-26 20:48:44.391079] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.530 [2024-11-26 20:48:44.391108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.530 [2024-11-26 20:48:44.391118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:49.530 [2024-11-26 20:48:44.394954] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.530 [2024-11-26 20:48:44.394983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.530 [2024-11-26 20:48:44.394994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:49.530 [2024-11-26 20:48:44.398757] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.530 [2024-11-26 20:48:44.398788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.530 [2024-11-26 20:48:44.398799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 
dnr:0 00:20:49.530 [2024-11-26 20:48:44.402663] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.530 [2024-11-26 20:48:44.402696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.530 [2024-11-26 20:48:44.402706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:49.530 [2024-11-26 20:48:44.406521] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.530 [2024-11-26 20:48:44.406554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.530 [2024-11-26 20:48:44.406565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:49.530 [2024-11-26 20:48:44.410398] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.530 [2024-11-26 20:48:44.410428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.530 [2024-11-26 20:48:44.410439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:49.530 [2024-11-26 20:48:44.414188] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.530 [2024-11-26 20:48:44.414218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.530 [2024-11-26 20:48:44.414229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:49.530 [2024-11-26 20:48:44.417969] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.530 [2024-11-26 20:48:44.417999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.530 [2024-11-26 20:48:44.418010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:49.530 [2024-11-26 20:48:44.421780] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.530 [2024-11-26 20:48:44.421811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.530 [2024-11-26 20:48:44.421823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:49.530 [2024-11-26 20:48:44.425683] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.530 [2024-11-26 20:48:44.425714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.530 [2024-11-26 20:48:44.425725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:49.530 [2024-11-26 20:48:44.429436] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.530 [2024-11-26 20:48:44.429467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.530 [2024-11-26 20:48:44.429478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:49.530 [2024-11-26 20:48:44.433264] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.530 [2024-11-26 20:48:44.433293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.530 [2024-11-26 20:48:44.433304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:49.530 [2024-11-26 20:48:44.436971] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.530 [2024-11-26 20:48:44.437001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.530 [2024-11-26 20:48:44.437012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:49.530 [2024-11-26 20:48:44.441004] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.530 [2024-11-26 20:48:44.441033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.530 [2024-11-26 20:48:44.441059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:49.530 [2024-11-26 20:48:44.444785] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.531 [2024-11-26 20:48:44.444814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.531 [2024-11-26 20:48:44.444841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:49.531 [2024-11-26 20:48:44.448495] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.531 [2024-11-26 20:48:44.448524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.531 [2024-11-26 20:48:44.448535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:49.531 [2024-11-26 20:48:44.452214] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.531 [2024-11-26 20:48:44.452242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.531 [2024-11-26 20:48:44.452253] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:49.531 [2024-11-26 20:48:44.456018] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.531 [2024-11-26 20:48:44.456048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.531 [2024-11-26 20:48:44.456059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:49.531 [2024-11-26 20:48:44.459751] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.531 [2024-11-26 20:48:44.459781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.531 [2024-11-26 20:48:44.459792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:49.531 [2024-11-26 20:48:44.463493] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.531 [2024-11-26 20:48:44.463522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.531 [2024-11-26 20:48:44.463532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:49.531 [2024-11-26 20:48:44.467186] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.531 [2024-11-26 20:48:44.467213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.531 [2024-11-26 20:48:44.467224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:49.531 [2024-11-26 20:48:44.470998] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.531 [2024-11-26 20:48:44.471027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.531 [2024-11-26 20:48:44.471038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:49.531 [2024-11-26 20:48:44.474768] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.531 [2024-11-26 20:48:44.474796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.531 [2024-11-26 20:48:44.474807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:49.531 [2024-11-26 20:48:44.478477] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.531 [2024-11-26 20:48:44.478505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.531 
[2024-11-26 20:48:44.478516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:49.531 [2024-11-26 20:48:44.482270] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.531 [2024-11-26 20:48:44.482298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.531 [2024-11-26 20:48:44.482308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:49.531 [2024-11-26 20:48:44.486039] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.531 [2024-11-26 20:48:44.486067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.531 [2024-11-26 20:48:44.486078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:49.531 [2024-11-26 20:48:44.489827] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.531 [2024-11-26 20:48:44.489855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.531 [2024-11-26 20:48:44.489882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:49.531 [2024-11-26 20:48:44.493622] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.531 [2024-11-26 20:48:44.493651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.531 [2024-11-26 20:48:44.493661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:49.531 [2024-11-26 20:48:44.497450] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.531 [2024-11-26 20:48:44.497479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.531 [2024-11-26 20:48:44.497505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:49.531 [2024-11-26 20:48:44.501159] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.531 [2024-11-26 20:48:44.501196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.531 [2024-11-26 20:48:44.501222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:49.531 [2024-11-26 20:48:44.504880] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.531 [2024-11-26 20:48:44.504910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10112 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.531 [2024-11-26 20:48:44.504920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:49.531 [2024-11-26 20:48:44.508727] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.531 [2024-11-26 20:48:44.508755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.531 [2024-11-26 20:48:44.508782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:49.531 [2024-11-26 20:48:44.512429] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.531 [2024-11-26 20:48:44.512459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.531 [2024-11-26 20:48:44.512469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:49.531 [2024-11-26 20:48:44.516183] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.531 [2024-11-26 20:48:44.516212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.531 [2024-11-26 20:48:44.516223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:49.791 [2024-11-26 20:48:44.519968] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.791 [2024-11-26 20:48:44.519998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.791 [2024-11-26 20:48:44.520008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:49.791 [2024-11-26 20:48:44.523695] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.791 [2024-11-26 20:48:44.523724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.791 [2024-11-26 20:48:44.523734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:49.791 [2024-11-26 20:48:44.527597] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.791 [2024-11-26 20:48:44.527626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.791 [2024-11-26 20:48:44.527636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:49.791 [2024-11-26 20:48:44.531290] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.791 [2024-11-26 20:48:44.531327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:4 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.791 [2024-11-26 20:48:44.531338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:49.791 [2024-11-26 20:48:44.534951] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.791 [2024-11-26 20:48:44.534980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.791 [2024-11-26 20:48:44.534991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:49.791 [2024-11-26 20:48:44.538892] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.791 [2024-11-26 20:48:44.538922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.791 [2024-11-26 20:48:44.538933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:49.791 [2024-11-26 20:48:44.542729] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.791 [2024-11-26 20:48:44.542758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.791 [2024-11-26 20:48:44.542769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:49.791 [2024-11-26 20:48:44.546521] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.791 [2024-11-26 20:48:44.546549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.791 [2024-11-26 20:48:44.546560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:49.791 [2024-11-26 20:48:44.550203] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.791 [2024-11-26 20:48:44.550230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.791 [2024-11-26 20:48:44.550256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:49.791 [2024-11-26 20:48:44.553924] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.791 [2024-11-26 20:48:44.553952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.791 [2024-11-26 20:48:44.553962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:49.791 [2024-11-26 20:48:44.557649] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.791 [2024-11-26 20:48:44.557677] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.792 [2024-11-26 20:48:44.557688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:49.792 [2024-11-26 20:48:44.561423] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.792 [2024-11-26 20:48:44.561451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.792 [2024-11-26 20:48:44.561462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:49.792 [2024-11-26 20:48:44.565134] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.792 [2024-11-26 20:48:44.565174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.792 [2024-11-26 20:48:44.565185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:49.792 [2024-11-26 20:48:44.568941] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.792 [2024-11-26 20:48:44.568970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.792 [2024-11-26 20:48:44.568981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:49.792 [2024-11-26 20:48:44.572831] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.792 [2024-11-26 20:48:44.572860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.792 [2024-11-26 20:48:44.572871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:49.792 [2024-11-26 20:48:44.576793] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.792 [2024-11-26 20:48:44.576825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.792 [2024-11-26 20:48:44.576836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:49.792 [2024-11-26 20:48:44.580712] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.792 [2024-11-26 20:48:44.580742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.792 [2024-11-26 20:48:44.580753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:49.792 [2024-11-26 20:48:44.584509] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 
00:20:49.792 [2024-11-26 20:48:44.584538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.792 [2024-11-26 20:48:44.584549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:49.792 [2024-11-26 20:48:44.588196] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.792 [2024-11-26 20:48:44.588224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.792 [2024-11-26 20:48:44.588235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:49.792 [2024-11-26 20:48:44.591943] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.792 [2024-11-26 20:48:44.591973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.792 [2024-11-26 20:48:44.591983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:49.792 [2024-11-26 20:48:44.595644] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.792 [2024-11-26 20:48:44.595673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.792 [2024-11-26 20:48:44.595684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:49.792 [2024-11-26 20:48:44.599357] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.792 [2024-11-26 20:48:44.599385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.792 [2024-11-26 20:48:44.599395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:49.792 [2024-11-26 20:48:44.603112] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.792 [2024-11-26 20:48:44.603141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.792 [2024-11-26 20:48:44.603168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:49.792 [2024-11-26 20:48:44.606866] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.792 [2024-11-26 20:48:44.606894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.792 [2024-11-26 20:48:44.606905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:49.792 [2024-11-26 20:48:44.610619] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x22339b0) 00:20:49.792 [2024-11-26 20:48:44.610649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.792 [2024-11-26 20:48:44.610675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:49.792 [2024-11-26 20:48:44.614354] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.792 [2024-11-26 20:48:44.614382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.792 [2024-11-26 20:48:44.614392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:49.792 [2024-11-26 20:48:44.618354] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.792 [2024-11-26 20:48:44.618381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.792 [2024-11-26 20:48:44.618391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:49.792 [2024-11-26 20:48:44.622239] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.792 [2024-11-26 20:48:44.622392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.792 [2024-11-26 20:48:44.622406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:49.792 [2024-11-26 20:48:44.626289] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.792 [2024-11-26 20:48:44.626319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.792 [2024-11-26 20:48:44.626330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:49.792 [2024-11-26 20:48:44.630284] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.792 [2024-11-26 20:48:44.630318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.792 [2024-11-26 20:48:44.630329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:49.792 [2024-11-26 20:48:44.634046] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.792 [2024-11-26 20:48:44.634075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.792 [2024-11-26 20:48:44.634086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:49.792 [2024-11-26 20:48:44.637941] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.792 [2024-11-26 20:48:44.637971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.792 [2024-11-26 20:48:44.637982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:49.792 [2024-11-26 20:48:44.641813] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.792 [2024-11-26 20:48:44.641842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.792 [2024-11-26 20:48:44.641852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:49.792 [2024-11-26 20:48:44.645621] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.792 [2024-11-26 20:48:44.645651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.792 [2024-11-26 20:48:44.645661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:49.792 [2024-11-26 20:48:44.649524] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.792 [2024-11-26 20:48:44.649553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.792 [2024-11-26 20:48:44.649563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:49.792 [2024-11-26 20:48:44.653307] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.792 [2024-11-26 20:48:44.653335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.792 [2024-11-26 20:48:44.653346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:49.792 [2024-11-26 20:48:44.657144] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.792 [2024-11-26 20:48:44.657182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.792 [2024-11-26 20:48:44.657193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:49.792 [2024-11-26 20:48:44.660945] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.792 [2024-11-26 20:48:44.660975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.793 [2024-11-26 20:48:44.660986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 
00:20:49.793 [2024-11-26 20:48:44.664840] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.793 [2024-11-26 20:48:44.664869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.793 [2024-11-26 20:48:44.664880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:49.793 [2024-11-26 20:48:44.668625] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.793 [2024-11-26 20:48:44.668654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.793 [2024-11-26 20:48:44.668664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:49.793 [2024-11-26 20:48:44.672320] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.793 [2024-11-26 20:48:44.672349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.793 [2024-11-26 20:48:44.672360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:49.793 [2024-11-26 20:48:44.676008] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.793 [2024-11-26 20:48:44.676037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.793 [2024-11-26 20:48:44.676047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:49.793 [2024-11-26 20:48:44.679722] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.793 [2024-11-26 20:48:44.679751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.793 [2024-11-26 20:48:44.679761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:49.793 [2024-11-26 20:48:44.683515] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.793 [2024-11-26 20:48:44.683545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.793 [2024-11-26 20:48:44.683555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:49.793 [2024-11-26 20:48:44.687334] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.793 [2024-11-26 20:48:44.687362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.793 [2024-11-26 20:48:44.687373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:49.793 [2024-11-26 20:48:44.691066] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.793 [2024-11-26 20:48:44.691094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.793 [2024-11-26 20:48:44.691105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:49.793 [2024-11-26 20:48:44.694933] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.793 [2024-11-26 20:48:44.694961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.793 [2024-11-26 20:48:44.694971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:49.793 [2024-11-26 20:48:44.698757] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.793 [2024-11-26 20:48:44.698787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.793 [2024-11-26 20:48:44.698797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:49.793 [2024-11-26 20:48:44.702583] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.793 [2024-11-26 20:48:44.702612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.793 [2024-11-26 20:48:44.702622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:49.793 [2024-11-26 20:48:44.706462] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.793 [2024-11-26 20:48:44.706491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.793 [2024-11-26 20:48:44.706501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:49.793 [2024-11-26 20:48:44.710279] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.793 [2024-11-26 20:48:44.710307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.793 [2024-11-26 20:48:44.710318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:49.793 [2024-11-26 20:48:44.714065] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.793 [2024-11-26 20:48:44.714093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.793 [2024-11-26 20:48:44.714104] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:49.793 [2024-11-26 20:48:44.717921] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.793 [2024-11-26 20:48:44.717951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.793 [2024-11-26 20:48:44.717961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:49.793 [2024-11-26 20:48:44.721775] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.793 [2024-11-26 20:48:44.721803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.793 [2024-11-26 20:48:44.721814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:49.793 [2024-11-26 20:48:44.725587] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.793 [2024-11-26 20:48:44.725616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.793 [2024-11-26 20:48:44.725626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:49.793 [2024-11-26 20:48:44.729404] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.793 [2024-11-26 20:48:44.729433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.793 [2024-11-26 20:48:44.729444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:49.793 [2024-11-26 20:48:44.733181] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.793 [2024-11-26 20:48:44.733208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.793 [2024-11-26 20:48:44.733219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:49.793 [2024-11-26 20:48:44.736970] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.793 [2024-11-26 20:48:44.736999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.793 [2024-11-26 20:48:44.737010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:49.793 [2024-11-26 20:48:44.740860] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.793 [2024-11-26 20:48:44.740889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.793 [2024-11-26 20:48:44.740900] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:49.793 [2024-11-26 20:48:44.744811] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.793 [2024-11-26 20:48:44.744840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.793 [2024-11-26 20:48:44.744850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:49.793 [2024-11-26 20:48:44.748700] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.793 [2024-11-26 20:48:44.748729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.793 [2024-11-26 20:48:44.748739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:49.793 [2024-11-26 20:48:44.752510] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.793 [2024-11-26 20:48:44.752539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.793 [2024-11-26 20:48:44.752550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:49.793 [2024-11-26 20:48:44.756264] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.793 [2024-11-26 20:48:44.756293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.793 [2024-11-26 20:48:44.756304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:49.793 [2024-11-26 20:48:44.759972] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.793 [2024-11-26 20:48:44.760001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.793 [2024-11-26 20:48:44.760027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:49.793 [2024-11-26 20:48:44.763826] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.794 [2024-11-26 20:48:44.763854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.794 [2024-11-26 20:48:44.763864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:49.794 [2024-11-26 20:48:44.767520] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.794 [2024-11-26 20:48:44.767548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:20:49.794 [2024-11-26 20:48:44.767559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:49.794 [2024-11-26 20:48:44.771254] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.794 [2024-11-26 20:48:44.771282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.794 [2024-11-26 20:48:44.771293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:49.794 [2024-11-26 20:48:44.774989] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.794 [2024-11-26 20:48:44.775017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.794 [2024-11-26 20:48:44.775027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:49.794 [2024-11-26 20:48:44.778846] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:49.794 [2024-11-26 20:48:44.778875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.794 [2024-11-26 20:48:44.778886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:50.054 [2024-11-26 20:48:44.782659] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:50.054 [2024-11-26 20:48:44.782689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.054 [2024-11-26 20:48:44.782700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:50.054 [2024-11-26 20:48:44.786550] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:50.054 [2024-11-26 20:48:44.786581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.054 [2024-11-26 20:48:44.786592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:50.054 [2024-11-26 20:48:44.790315] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:50.054 [2024-11-26 20:48:44.790344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.054 [2024-11-26 20:48:44.790354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:50.054 [2024-11-26 20:48:44.794110] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:50.054 [2024-11-26 20:48:44.794139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 
lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.054 [2024-11-26 20:48:44.794149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:50.054 [2024-11-26 20:48:44.797906] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:50.054 [2024-11-26 20:48:44.797935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.054 [2024-11-26 20:48:44.797946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:50.054 [2024-11-26 20:48:44.801784] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:50.054 [2024-11-26 20:48:44.801813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.054 [2024-11-26 20:48:44.801824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:50.054 [2024-11-26 20:48:44.805748] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:50.054 [2024-11-26 20:48:44.805779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.054 [2024-11-26 20:48:44.805790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:50.054 8137.50 IOPS, 1017.19 MiB/s [2024-11-26T20:48:45.047Z] [2024-11-26 20:48:44.810325] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22339b0) 00:20:50.054 [2024-11-26 20:48:44.810357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.054 [2024-11-26 20:48:44.810369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:50.054 00:20:50.054 Latency(us) 00:20:50.054 [2024-11-26T20:48:45.047Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:50.054 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:20:50.054 nvme0n1 : 2.00 8134.71 1016.84 0.00 0.00 1963.92 1771.03 5274.09 00:20:50.054 [2024-11-26T20:48:45.047Z] =================================================================================================================== 00:20:50.054 [2024-11-26T20:48:45.047Z] Total : 8134.71 1016.84 0.00 0.00 1963.92 1771.03 5274.09 00:20:50.054 { 00:20:50.054 "results": [ 00:20:50.054 { 00:20:50.054 "job": "nvme0n1", 00:20:50.054 "core_mask": "0x2", 00:20:50.054 "workload": "randread", 00:20:50.054 "status": "finished", 00:20:50.054 "queue_depth": 16, 00:20:50.054 "io_size": 131072, 00:20:50.054 "runtime": 2.002654, 00:20:50.054 "iops": 8134.705246138375, 00:20:50.054 "mibps": 1016.8381557672968, 00:20:50.054 "io_failed": 0, 00:20:50.054 "io_timeout": 0, 00:20:50.054 "avg_latency_us": 1963.9186841697579, 00:20:50.054 "min_latency_us": 1771.032380952381, 00:20:50.054 "max_latency_us": 5274.087619047619 00:20:50.054 } 00:20:50.054 ], 00:20:50.054 
"core_count": 1 00:20:50.054 } 00:20:50.054 20:48:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:20:50.054 20:48:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:20:50.054 20:48:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:20:50.054 | .driver_specific 00:20:50.054 | .nvme_error 00:20:50.054 | .status_code 00:20:50.054 | .command_transient_transport_error' 00:20:50.054 20:48:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:20:50.315 20:48:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 526 > 0 )) 00:20:50.315 20:48:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80922 00:20:50.315 20:48:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 80922 ']' 00:20:50.316 20:48:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 80922 00:20:50.316 20:48:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:20:50.316 20:48:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:50.316 20:48:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80922 00:20:50.316 20:48:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:50.316 20:48:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:50.316 killing process with pid 80922 00:20:50.316 20:48:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80922' 00:20:50.316 Received shutdown signal, test time was about 2.000000 seconds 00:20:50.316 00:20:50.316 Latency(us) 00:20:50.316 [2024-11-26T20:48:45.309Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:50.316 [2024-11-26T20:48:45.309Z] =================================================================================================================== 00:20:50.316 [2024-11-26T20:48:45.309Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:50.316 20:48:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 80922 00:20:50.316 20:48:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 80922 00:20:50.574 20:48:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:20:50.574 20:48:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:20:50.574 20:48:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:20:50.574 20:48:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:20:50.574 20:48:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:20:50.574 20:48:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80975 00:20:50.574 20:48:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80975 
/var/tmp/bperf.sock 00:20:50.574 20:48:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 80975 ']' 00:20:50.574 20:48:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:20:50.574 20:48:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:20:50.574 20:48:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:50.574 20:48:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:20:50.574 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:20:50.574 20:48:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:50.574 20:48:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:50.574 [2024-11-26 20:48:45.380151] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:20:50.574 [2024-11-26 20:48:45.380264] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80975 ] 00:20:50.574 [2024-11-26 20:48:45.532255] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:50.833 [2024-11-26 20:48:45.584516] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:50.833 [2024-11-26 20:48:45.627099] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:50.833 20:48:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:50.833 20:48:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:20:50.833 20:48:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:20:50.834 20:48:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:20:51.093 20:48:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:20:51.093 20:48:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.093 20:48:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:51.093 20:48:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.093 20:48:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:51.093 20:48:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n 
nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:51.351 nvme0n1 00:20:51.352 20:48:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:20:51.352 20:48:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.352 20:48:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:51.352 20:48:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.352 20:48:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:20:51.352 20:48:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:20:51.611 Running I/O for 2 seconds... 00:20:51.611 [2024-11-26 20:48:46.396756] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69cae0) with pdu=0x200016efb048 00:20:51.611 [2024-11-26 20:48:46.397924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:17256 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:51.611 [2024-11-26 20:48:46.397955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:51.611 [2024-11-26 20:48:46.409384] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69cae0) with pdu=0x200016efb8b8 00:20:51.611 [2024-11-26 20:48:46.410518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:16766 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:51.611 [2024-11-26 20:48:46.410547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.611 [2024-11-26 20:48:46.421908] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69cae0) with pdu=0x200016efc128 00:20:51.611 [2024-11-26 20:48:46.423019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:5599 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:51.611 [2024-11-26 20:48:46.423048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:20:51.611 [2024-11-26 20:48:46.434477] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69cae0) with pdu=0x200016efc998 00:20:51.611 [2024-11-26 20:48:46.435576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:2446 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:51.611 [2024-11-26 20:48:46.435604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:20:51.611 [2024-11-26 20:48:46.446995] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69cae0) with pdu=0x200016efd208 00:20:51.611 [2024-11-26 20:48:46.448088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:7546 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:51.611 [2024-11-26 20:48:46.448116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:20:51.611 [2024-11-26 20:48:46.459567] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69cae0) with pdu=0x200016efda78 00:20:51.611 [2024-11-26 20:48:46.460633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:511 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:51.611 [2024-11-26 20:48:46.460660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:20:51.611 [2024-11-26 20:48:46.472024] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69cae0) with pdu=0x200016efe2e8 00:20:51.611 [2024-11-26 20:48:46.473059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:3838 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:51.611 [2024-11-26 20:48:46.473086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:20:51.611 [2024-11-26 20:48:46.484545] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69cae0) with pdu=0x200016efeb58 00:20:51.611 [2024-11-26 20:48:46.485570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24535 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:51.611 [2024-11-26 20:48:46.485595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:20:51.611 [2024-11-26 20:48:46.502146] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69cae0) with pdu=0x200016efef90 00:20:51.611 [2024-11-26 20:48:46.504164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25011 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:51.611 [2024-11-26 20:48:46.504192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:20:51.611 [2024-11-26 20:48:46.514818] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69cae0) with pdu=0x200016efeb58 00:20:51.611 [2024-11-26 20:48:46.516814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:22906 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:51.611 [2024-11-26 20:48:46.516839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:20:51.611 [2024-11-26 20:48:46.527406] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69cae0) with pdu=0x200016efe2e8 00:20:51.611 [2024-11-26 20:48:46.529366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:2516 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:51.611 [2024-11-26 20:48:46.529393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:20:51.611 [2024-11-26 20:48:46.540031] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69cae0) with pdu=0x200016efda78 00:20:51.611 [2024-11-26 20:48:46.541973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:23703 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:51.611 [2024-11-26 20:48:46.541998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:20:51.611 [2024-11-26 20:48:46.552738] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69cae0) with pdu=0x200016efd208 00:20:51.611 [2024-11-26 20:48:46.554671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:2560 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:51.611 [2024-11-26 20:48:46.554697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:20:51.611 [2024-11-26 20:48:46.565535] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69cae0) with pdu=0x200016efc998 00:20:51.611 [2024-11-26 20:48:46.567461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:23320 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:51.611 [2024-11-26 20:48:46.567488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:20:51.611 [2024-11-26 20:48:46.578187] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69cae0) with pdu=0x200016efc128 00:20:51.611 [2024-11-26 20:48:46.580102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:6054 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:51.611 [2024-11-26 20:48:46.580127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:20:51.611 [2024-11-26 20:48:46.590991] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69cae0) with pdu=0x200016efb8b8 00:20:51.611 [2024-11-26 20:48:46.592912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:16143 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:51.611 [2024-11-26 20:48:46.592936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:20:51.870 [2024-11-26 20:48:46.603775] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69cae0) with pdu=0x200016efb048 00:20:51.870 [2024-11-26 20:48:46.605742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:6779 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:51.870 [2024-11-26 20:48:46.605766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:51.870 [2024-11-26 20:48:46.616504] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69cae0) with pdu=0x200016efa7d8 00:20:51.870 [2024-11-26 20:48:46.618359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:6574 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:51.870 [2024-11-26 20:48:46.618385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:20:51.870 [2024-11-26 20:48:46.629389] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69cae0) with pdu=0x200016ef9f68 00:20:51.870 [2024-11-26 20:48:46.631232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:24294 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:51.870 [2024-11-26 20:48:46.631258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:20:51.870 [2024-11-26 
20:48:46.642730] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69cae0) with pdu=0x200016ef96f8 00:20:51.870 [2024-11-26 20:48:46.644747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:17710 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:51.870 [2024-11-26 20:48:46.644771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:20:51.870 [2024-11-26 20:48:46.655779] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69cae0) with pdu=0x200016ef8e88 00:20:51.870 [2024-11-26 20:48:46.657622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:21361 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:51.870 [2024-11-26 20:48:46.657648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:20:51.870 [2024-11-26 20:48:46.668484] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69cae0) with pdu=0x200016ef8618 00:20:51.870 [2024-11-26 20:48:46.670277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:22980 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:51.870 [2024-11-26 20:48:46.670303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:20:51.870 [2024-11-26 20:48:46.681119] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69cae0) with pdu=0x200016ef7da8 00:20:51.870 [2024-11-26 20:48:46.682890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:11577 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:51.870 [2024-11-26 20:48:46.682915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:20:51.870 [2024-11-26 20:48:46.693586] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69cae0) with pdu=0x200016ef7538 00:20:51.870 [2024-11-26 20:48:46.695362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:17811 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:51.870 [2024-11-26 20:48:46.695385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:20:51.870 [2024-11-26 20:48:46.706099] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69cae0) with pdu=0x200016ef6cc8 00:20:51.870 [2024-11-26 20:48:46.707860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:9053 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:51.870 [2024-11-26 20:48:46.707898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:20:51.870 [2024-11-26 20:48:46.718889] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69cae0) with pdu=0x200016ef6458 00:20:51.870 [2024-11-26 20:48:46.720753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:12180 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:51.870 [2024-11-26 20:48:46.720778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:004f p:0 m:0 dnr:0 
00:20:51.870 [2024-11-26 20:48:46.731837] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69cae0) with pdu=0x200016ef5be8 00:20:51.870 [2024-11-26 20:48:46.733664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:24944 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:51.870 [2024-11-26 20:48:46.733689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:20:51.870 [2024-11-26 20:48:46.744942] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69cae0) with pdu=0x200016ef5378 00:20:51.870 [2024-11-26 20:48:46.746674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:6542 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:51.871 [2024-11-26 20:48:46.746706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:20:51.871 [2024-11-26 20:48:46.757970] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69cae0) with pdu=0x200016ef4b08 00:20:51.871 [2024-11-26 20:48:46.759752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:9450 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:51.871 [2024-11-26 20:48:46.759800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:20:51.871 [2024-11-26 20:48:46.770910] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69cae0) with pdu=0x200016ef4298 00:20:51.871 [2024-11-26 20:48:46.772743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:21763 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:51.871 [2024-11-26 20:48:46.772778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:20:51.871 [2024-11-26 20:48:46.783917] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69cae0) with pdu=0x200016ef3a28 00:20:51.871 [2024-11-26 20:48:46.785627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:20881 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:51.871 [2024-11-26 20:48:46.785658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:20:51.871 [2024-11-26 20:48:46.796610] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69cae0) with pdu=0x200016ef31b8 00:20:51.871 [2024-11-26 20:48:46.798281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:16159 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:51.871 [2024-11-26 20:48:46.798309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:20:51.871 [2024-11-26 20:48:46.809272] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69cae0) with pdu=0x200016ef2948 00:20:51.871 [2024-11-26 20:48:46.810903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:13556 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:51.871 [2024-11-26 20:48:46.810931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0041 
p:0 m:0 dnr:0 00:20:51.871 [2024-11-26 20:48:46.821946] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69cae0) with pdu=0x200016ef20d8 00:20:51.871 [2024-11-26 20:48:46.823681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:7082 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:51.871 [2024-11-26 20:48:46.823707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:20:51.871 [2024-11-26 20:48:46.834713] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69cae0) with pdu=0x200016ef1868 00:20:51.871 [2024-11-26 20:48:46.836340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:6352 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:51.871 [2024-11-26 20:48:46.836383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:20:51.871 [2024-11-26 20:48:46.847377] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69cae0) with pdu=0x200016ef0ff8 00:20:51.871 [2024-11-26 20:48:46.848944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:18166 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:51.871 [2024-11-26 20:48:46.848973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:20:51.871 [2024-11-26 20:48:46.860066] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69cae0) with pdu=0x200016ef0788 00:20:52.130 [2024-11-26 20:48:46.861626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:14471 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.130 [2024-11-26 20:48:46.861653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:20:52.130 [2024-11-26 20:48:46.872715] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69cae0) with pdu=0x200016eeff18 00:20:52.130 [2024-11-26 20:48:46.874283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:14032 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.130 [2024-11-26 20:48:46.874321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:20:52.130 [2024-11-26 20:48:46.885288] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69cae0) with pdu=0x200016eef6a8 00:20:52.130 [2024-11-26 20:48:46.886824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:5517 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.130 [2024-11-26 20:48:46.886849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:20:52.130 [2024-11-26 20:48:46.898116] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69cae0) with pdu=0x200016eeee38 00:20:52.130 [2024-11-26 20:48:46.899645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:10864 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.130 [2024-11-26 20:48:46.899671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 
cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:20:52.130 [2024-11-26 20:48:46.910648] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69cae0) with pdu=0x200016eee5c8 00:20:52.130 [2024-11-26 20:48:46.912176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:24285 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.130 [2024-11-26 20:48:46.912202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:20:52.130 [2024-11-26 20:48:46.923321] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69cae0) with pdu=0x200016eedd58 00:20:52.131 [2024-11-26 20:48:46.924795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:19819 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.131 [2024-11-26 20:48:46.924821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:20:52.131 [2024-11-26 20:48:46.936049] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69cae0) with pdu=0x200016eed4e8 00:20:52.131 [2024-11-26 20:48:46.937517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:7007 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.131 [2024-11-26 20:48:46.937543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:20:52.131 [2024-11-26 20:48:46.948737] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69cae0) with pdu=0x200016eecc78 00:20:52.131 [2024-11-26 20:48:46.950192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:15774 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.131 [2024-11-26 20:48:46.950218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:20:52.131 [2024-11-26 20:48:46.961418] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69cae0) with pdu=0x200016eec408 00:20:52.131 [2024-11-26 20:48:46.962856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:13155 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.131 [2024-11-26 20:48:46.962885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:20:52.131 [2024-11-26 20:48:46.974106] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69cae0) with pdu=0x200016eebb98 00:20:52.131 [2024-11-26 20:48:46.975579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:23588 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.131 [2024-11-26 20:48:46.975614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:20:52.131 [2024-11-26 20:48:46.986769] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69cae0) with pdu=0x200016eeb328 00:20:52.131 [2024-11-26 20:48:46.988211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:2312 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.131 [2024-11-26 20:48:46.988244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:85 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:20:52.131 [2024-11-26 20:48:46.999419] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69cae0) with pdu=0x200016eeaab8 00:20:52.131 [2024-11-26 20:48:47.000810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:9536 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.131 [2024-11-26 20:48:47.000838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:20:52.131 [2024-11-26 20:48:47.012053] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69cae0) with pdu=0x200016eea248 00:20:52.131 [2024-11-26 20:48:47.013430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:4020 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.131 [2024-11-26 20:48:47.013457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:52.131 [2024-11-26 20:48:47.024679] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69cae0) with pdu=0x200016ee99d8 00:20:52.131 [2024-11-26 20:48:47.026052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:1875 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.131 [2024-11-26 20:48:47.026080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:20:52.131 [2024-11-26 20:48:47.037276] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69cae0) with pdu=0x200016ee9168 00:20:52.131 [2024-11-26 20:48:47.038646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:15996 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.131 [2024-11-26 20:48:47.038673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:20:52.131 [2024-11-26 20:48:47.049890] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69cae0) with pdu=0x200016ee88f8 00:20:52.131 [2024-11-26 20:48:47.051233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:9414 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.131 [2024-11-26 20:48:47.051258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:20:52.131 [2024-11-26 20:48:47.062430] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69cae0) with pdu=0x200016ee8088 00:20:52.131 [2024-11-26 20:48:47.063760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:8603 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.131 [2024-11-26 20:48:47.063787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:20:52.131 [2024-11-26 20:48:47.075057] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69cae0) with pdu=0x200016ee7818 00:20:52.131 [2024-11-26 20:48:47.076469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:16638 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.131 [2024-11-26 20:48:47.076497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:20:52.131 [2024-11-26 20:48:47.087885] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69cae0) with pdu=0x200016ee6fa8 00:20:52.131 [2024-11-26 20:48:47.089171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:22238 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.131 [2024-11-26 20:48:47.089196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:20:52.131 [2024-11-26 20:48:47.100512] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69cae0) with pdu=0x200016ee6738 00:20:52.131 [2024-11-26 20:48:47.101777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:11342 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.131 [2024-11-26 20:48:47.101803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:20:52.131 [2024-11-26 20:48:47.113104] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69cae0) with pdu=0x200016ee5ec8 00:20:52.131 [2024-11-26 20:48:47.114358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:2324 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.131 [2024-11-26 20:48:47.114384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:20:52.390 [2024-11-26 20:48:47.125818] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69cae0) with pdu=0x200016ee5658 00:20:52.390 [2024-11-26 20:48:47.127044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:13023 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.390 [2024-11-26 20:48:47.127070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:20:52.390 [2024-11-26 20:48:47.138346] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69cae0) with pdu=0x200016ee4de8 00:20:52.390 [2024-11-26 20:48:47.139572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:3713 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.390 [2024-11-26 20:48:47.139598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:20:52.390 [2024-11-26 20:48:47.150918] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69cae0) with pdu=0x200016ee4578 00:20:52.390 [2024-11-26 20:48:47.152227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:15016 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.390 [2024-11-26 20:48:47.152255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:20:52.390 [2024-11-26 20:48:47.163666] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69cae0) with pdu=0x200016ee3d08 00:20:52.390 [2024-11-26 20:48:47.164851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:22091 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.390 [2024-11-26 20:48:47.164878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:20:52.390 [2024-11-26 20:48:47.176261] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69cae0) with pdu=0x200016ee3498 00:20:52.390 [2024-11-26 20:48:47.177433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:10330 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.390 [2024-11-26 20:48:47.177459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:20:52.391 [2024-11-26 20:48:47.188996] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69cae0) with pdu=0x200016ee2c28 00:20:52.391 [2024-11-26 20:48:47.190148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:24810 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.391 [2024-11-26 20:48:47.190189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:20:52.391 [2024-11-26 20:48:47.201954] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69cae0) with pdu=0x200016ee23b8 00:20:52.391 [2024-11-26 20:48:47.203095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:22355 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.391 [2024-11-26 20:48:47.203123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:20:52.391 [2024-11-26 20:48:47.214607] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69cae0) with pdu=0x200016ee1b48 00:20:52.391 [2024-11-26 20:48:47.215766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:11162 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.391 [2024-11-26 20:48:47.215794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:52.391 [2024-11-26 20:48:47.227248] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69cae0) with pdu=0x200016ee12d8 00:20:52.391 [2024-11-26 20:48:47.228376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:17545 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.391 [2024-11-26 20:48:47.228403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:20:52.391 [2024-11-26 20:48:47.239780] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69cae0) with pdu=0x200016ee0a68 00:20:52.391 [2024-11-26 20:48:47.240870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:23243 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.391 [2024-11-26 20:48:47.240896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:20:52.391 [2024-11-26 20:48:47.252461] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69cae0) with pdu=0x200016ee01f8 00:20:52.391 [2024-11-26 20:48:47.253610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13440 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.391 [2024-11-26 20:48:47.253645] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:20:52.391 [2024-11-26 20:48:47.265287] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69cae0) with pdu=0x200016edf988 00:20:52.391 [2024-11-26 20:48:47.266375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:22414 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.391 [2024-11-26 20:48:47.266411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:20:52.391 [2024-11-26 20:48:47.277948] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69cae0) with pdu=0x200016edf118 00:20:52.391 [2024-11-26 20:48:47.279010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:9267 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.391 [2024-11-26 20:48:47.279040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:20:52.391 [2024-11-26 20:48:47.290494] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69cae0) with pdu=0x200016ede8a8 00:20:52.391 [2024-11-26 20:48:47.291552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:16392 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.391 [2024-11-26 20:48:47.291580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:20:52.391 [2024-11-26 20:48:47.303000] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69cae0) with pdu=0x200016ede038 00:20:52.391 [2024-11-26 20:48:47.304045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1292 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.391 [2024-11-26 20:48:47.304073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:20:52.391 [2024-11-26 20:48:47.320867] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69cae0) with pdu=0x200016ede038 00:20:52.391 [2024-11-26 20:48:47.322848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:5859 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.391 [2024-11-26 20:48:47.322874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:20:52.391 [2024-11-26 20:48:47.333410] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69cae0) with pdu=0x200016ede8a8 00:20:52.391 [2024-11-26 20:48:47.335384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:17359 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.391 [2024-11-26 20:48:47.335411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:20:52.391 [2024-11-26 20:48:47.346043] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69cae0) with pdu=0x200016edf118 00:20:52.391 [2024-11-26 20:48:47.348015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:1462 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.391 [2024-11-26 20:48:47.348040] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:20:52.391 [2024-11-26 20:48:47.358648] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69cae0) with pdu=0x200016edf988 00:20:52.391 [2024-11-26 20:48:47.360597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:14985 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.391 [2024-11-26 20:48:47.360621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:20:52.391 [2024-11-26 20:48:47.371237] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69cae0) with pdu=0x200016ee01f8 00:20:52.391 [2024-11-26 20:48:47.373190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:4738 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.391 [2024-11-26 20:48:47.373215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:20:52.650 [2024-11-26 20:48:47.384004] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69cae0) with pdu=0x200016ee0a68 00:20:52.650 [2024-11-26 20:48:47.386239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:389 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.650 [2024-11-26 20:48:47.386265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:20:52.650 19863.00 IOPS, 77.59 MiB/s [2024-11-26T20:48:47.643Z] [2024-11-26 20:48:47.397018] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69cae0) with pdu=0x200016ee12d8 00:20:52.650 [2024-11-26 20:48:47.398922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:15382 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.650 [2024-11-26 20:48:47.398950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:20:52.650 [2024-11-26 20:48:47.409659] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69cae0) with pdu=0x200016ee1b48 00:20:52.650 [2024-11-26 20:48:47.411558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:12086 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.650 [2024-11-26 20:48:47.411584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:52.650 [2024-11-26 20:48:47.422243] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69cae0) with pdu=0x200016ee23b8 00:20:52.650 [2024-11-26 20:48:47.424104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:22919 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.650 [2024-11-26 20:48:47.424130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:20:52.650 [2024-11-26 20:48:47.434815] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69cae0) with pdu=0x200016ee2c28 00:20:52.650 [2024-11-26 20:48:47.436673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:2860 len:1 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.650 [2024-11-26 20:48:47.436699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:20:52.650 [2024-11-26 20:48:47.447527] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69cae0) with pdu=0x200016ee3498 00:20:52.650 [2024-11-26 20:48:47.449361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:7326 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.650 [2024-11-26 20:48:47.449385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:20:52.650 [2024-11-26 20:48:47.460294] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69cae0) with pdu=0x200016ee3d08 00:20:52.650 [2024-11-26 20:48:47.462101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:20481 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.650 [2024-11-26 20:48:47.462126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:20:52.650 [2024-11-26 20:48:47.472950] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69cae0) with pdu=0x200016ee4578 00:20:52.650 [2024-11-26 20:48:47.474753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:2880 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.650 [2024-11-26 20:48:47.474779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:20:52.650 [2024-11-26 20:48:47.485599] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69cae0) with pdu=0x200016ee4de8 00:20:52.651 [2024-11-26 20:48:47.487393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:75 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.651 [2024-11-26 20:48:47.487420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:20:52.651 [2024-11-26 20:48:47.500118] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69cae0) with pdu=0x200016ee5658 00:20:52.651 [2024-11-26 20:48:47.502273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:16367 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.651 [2024-11-26 20:48:47.502309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:20:52.651 [2024-11-26 20:48:47.514832] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69cae0) with pdu=0x200016ee5ec8 00:20:52.651 [2024-11-26 20:48:47.516909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:22156 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.651 [2024-11-26 20:48:47.516943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:20:52.651 [2024-11-26 20:48:47.528271] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69cae0) with pdu=0x200016ee6738 00:20:52.651 [2024-11-26 20:48:47.530034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 
lba:17703 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.651 [2024-11-26 20:48:47.530061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:20:52.651 [2024-11-26 20:48:47.541024] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69cae0) with pdu=0x200016ee6fa8 00:20:52.651 [2024-11-26 20:48:47.542763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:19484 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.651 [2024-11-26 20:48:47.542788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:20:52.651 [2024-11-26 20:48:47.553620] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69cae0) with pdu=0x200016ee7818 00:20:52.651 [2024-11-26 20:48:47.555337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:2013 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.651 [2024-11-26 20:48:47.555363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:20:52.651 [2024-11-26 20:48:47.566240] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69cae0) with pdu=0x200016ee8088 00:20:52.651 [2024-11-26 20:48:47.567948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:25380 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.651 [2024-11-26 20:48:47.567974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:20:52.651 [2024-11-26 20:48:47.578802] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69cae0) with pdu=0x200016ee88f8 00:20:52.651 [2024-11-26 20:48:47.580492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:9447 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.651 [2024-11-26 20:48:47.580518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:20:52.651 [2024-11-26 20:48:47.591400] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69cae0) with pdu=0x200016ee9168 00:20:52.651 [2024-11-26 20:48:47.593051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:3972 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.651 [2024-11-26 20:48:47.593078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:20:52.651 [2024-11-26 20:48:47.603998] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69cae0) with pdu=0x200016ee99d8 00:20:52.651 [2024-11-26 20:48:47.605669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:5486 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.651 [2024-11-26 20:48:47.605700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:20:52.651 [2024-11-26 20:48:47.616690] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69cae0) with pdu=0x200016eea248 00:20:52.651 [2024-11-26 20:48:47.618352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:40 nsid:1 lba:3786 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.651 [2024-11-26 20:48:47.618383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:52.651 [2024-11-26 20:48:47.629355] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69cae0) with pdu=0x200016eeaab8 00:20:52.651 [2024-11-26 20:48:47.630959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:25180 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.651 [2024-11-26 20:48:47.630986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:20:52.911 [2024-11-26 20:48:47.641950] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69cae0) with pdu=0x200016eeb328 00:20:52.911 [2024-11-26 20:48:47.643700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:22299 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.911 [2024-11-26 20:48:47.643727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:20:52.911 [2024-11-26 20:48:47.654928] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69cae0) with pdu=0x200016eebb98 00:20:52.911 [2024-11-26 20:48:47.656626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:9938 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.911 [2024-11-26 20:48:47.656651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:20:52.911 [2024-11-26 20:48:47.667939] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69cae0) with pdu=0x200016eec408 00:20:52.911 [2024-11-26 20:48:47.669700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:300 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.911 [2024-11-26 20:48:47.669729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:20:52.911 [2024-11-26 20:48:47.680886] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69cae0) with pdu=0x200016eecc78 00:20:52.911 [2024-11-26 20:48:47.682446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:7569 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.911 [2024-11-26 20:48:47.682473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:20:52.911 [2024-11-26 20:48:47.693690] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69cae0) with pdu=0x200016eed4e8 00:20:52.911 [2024-11-26 20:48:47.695231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7465 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.911 [2024-11-26 20:48:47.695258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:20:52.911 [2024-11-26 20:48:47.707498] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69cae0) with pdu=0x200016eedd58 00:20:52.911 [2024-11-26 20:48:47.709105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:7 nsid:1 lba:11631 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.911 [2024-11-26 20:48:47.709134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:20:52.911 [2024-11-26 20:48:47.720533] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69cae0) with pdu=0x200016eee5c8 00:20:52.911 [2024-11-26 20:48:47.722035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16044 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.911 [2024-11-26 20:48:47.722061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:20:52.911 [2024-11-26 20:48:47.733351] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69cae0) with pdu=0x200016eeee38 00:20:52.911 [2024-11-26 20:48:47.734833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:20675 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.911 [2024-11-26 20:48:47.734859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:20:52.911 [2024-11-26 20:48:47.745896] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69cae0) with pdu=0x200016eef6a8 00:20:52.911 [2024-11-26 20:48:47.747397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:9126 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.911 [2024-11-26 20:48:47.747423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:20:52.911 [2024-11-26 20:48:47.758515] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69cae0) with pdu=0x200016eeff18 00:20:52.911 [2024-11-26 20:48:47.759988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:14295 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.911 [2024-11-26 20:48:47.760014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:20:52.911 [2024-11-26 20:48:47.771131] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69cae0) with pdu=0x200016ef0788 00:20:52.911 [2024-11-26 20:48:47.772604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:13911 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.911 [2024-11-26 20:48:47.772632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:20:52.911 [2024-11-26 20:48:47.783781] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69cae0) with pdu=0x200016ef0ff8 00:20:52.911 [2024-11-26 20:48:47.785208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:11867 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.911 [2024-11-26 20:48:47.785233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:20:52.911 [2024-11-26 20:48:47.796444] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69cae0) with pdu=0x200016ef1868 00:20:52.911 [2024-11-26 20:48:47.797852] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:14904 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.911 [2024-11-26 20:48:47.797878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:52.911 [2024-11-26 20:48:47.809162] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69cae0) with pdu=0x200016ef20d8 00:20:52.911 [2024-11-26 20:48:47.810586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:11215 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.911 [2024-11-26 20:48:47.810620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:20:52.912 [2024-11-26 20:48:47.821870] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69cae0) with pdu=0x200016ef2948 00:20:52.912 [2024-11-26 20:48:47.823268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:23533 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.912 [2024-11-26 20:48:47.823308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:52.912 [2024-11-26 20:48:47.834580] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69cae0) with pdu=0x200016ef31b8 00:20:52.912 [2024-11-26 20:48:47.835950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:8951 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.912 [2024-11-26 20:48:47.835977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:20:52.912 [2024-11-26 20:48:47.847226] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69cae0) with pdu=0x200016ef3a28 00:20:52.912 [2024-11-26 20:48:47.848599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:11319 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.912 [2024-11-26 20:48:47.848626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:20:52.912 [2024-11-26 20:48:47.859812] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69cae0) with pdu=0x200016ef4298 00:20:52.912 [2024-11-26 20:48:47.861142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:1097 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.912 [2024-11-26 20:48:47.861173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:20:52.912 [2024-11-26 20:48:47.872556] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69cae0) with pdu=0x200016ef4b08 00:20:52.912 [2024-11-26 20:48:47.873907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:20960 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.912 [2024-11-26 20:48:47.873939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:20:52.912 [2024-11-26 20:48:47.885358] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69cae0) with pdu=0x200016ef5378 00:20:52.912 [2024-11-26 20:48:47.886691] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:22369 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.912 [2024-11-26 20:48:47.886724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:20:52.912 [2024-11-26 20:48:47.898217] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69cae0) with pdu=0x200016ef5be8 00:20:52.912 [2024-11-26 20:48:47.899626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:20140 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.912 [2024-11-26 20:48:47.899656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:20:53.171 [2024-11-26 20:48:47.911500] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69cae0) with pdu=0x200016ef6458 00:20:53.171 [2024-11-26 20:48:47.912833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:12138 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.171 [2024-11-26 20:48:47.912860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:20:53.171 [2024-11-26 20:48:47.924338] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69cae0) with pdu=0x200016ef6cc8 00:20:53.171 [2024-11-26 20:48:47.925596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:24396 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.171 [2024-11-26 20:48:47.925623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:20:53.171 [2024-11-26 20:48:47.937031] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69cae0) with pdu=0x200016ef7538 00:20:53.171 [2024-11-26 20:48:47.938312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:21140 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.171 [2024-11-26 20:48:47.938339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:20:53.171 [2024-11-26 20:48:47.950093] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69cae0) with pdu=0x200016ef7da8 00:20:53.171 [2024-11-26 20:48:47.951353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:6733 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.171 [2024-11-26 20:48:47.951382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:20:53.171 [2024-11-26 20:48:47.963000] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69cae0) with pdu=0x200016ef8618 00:20:53.171 [2024-11-26 20:48:47.964326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:12420 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.171 [2024-11-26 20:48:47.964355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:20:53.171 [2024-11-26 20:48:47.975954] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69cae0) with pdu=0x200016ef8e88 00:20:53.171 [2024-11-26 20:48:47.977204] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:24067 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.171 [2024-11-26 20:48:47.977233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:20:53.171 [2024-11-26 20:48:47.988953] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69cae0) with pdu=0x200016ef96f8 00:20:53.171 [2024-11-26 20:48:47.990136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:18366 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.171 [2024-11-26 20:48:47.990175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:20:53.171 [2024-11-26 20:48:48.001748] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69cae0) with pdu=0x200016ef9f68 00:20:53.171 [2024-11-26 20:48:48.002945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:6529 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.171 [2024-11-26 20:48:48.002979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:20:53.171 [2024-11-26 20:48:48.014555] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69cae0) with pdu=0x200016efa7d8 00:20:53.171 [2024-11-26 20:48:48.015747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:17948 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.171 [2024-11-26 20:48:48.015782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:20:53.171 [2024-11-26 20:48:48.027754] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69cae0) with pdu=0x200016efb048 00:20:53.171 [2024-11-26 20:48:48.029002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:6275 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.171 [2024-11-26 20:48:48.029032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:53.171 [2024-11-26 20:48:48.040732] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69cae0) with pdu=0x200016efb8b8 00:20:53.171 [2024-11-26 20:48:48.041866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:7225 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.171 [2024-11-26 20:48:48.041893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.171 [2024-11-26 20:48:48.053379] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69cae0) with pdu=0x200016efc128 00:20:53.171 [2024-11-26 20:48:48.054496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:20064 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.171 [2024-11-26 20:48:48.054523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:20:53.171 [2024-11-26 20:48:48.066091] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69cae0) with pdu=0x200016efc998 00:20:53.171 
[2024-11-26 20:48:48.067181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:995 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.171 [2024-11-26 20:48:48.067208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:20:53.171 [2024-11-26 20:48:48.078739] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69cae0) with pdu=0x200016efd208 00:20:53.171 [2024-11-26 20:48:48.079864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:6140 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.171 [2024-11-26 20:48:48.079892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:20:53.171 [2024-11-26 20:48:48.091499] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69cae0) with pdu=0x200016efda78 00:20:53.171 [2024-11-26 20:48:48.092561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:3471 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.171 [2024-11-26 20:48:48.092587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:20:53.171 [2024-11-26 20:48:48.104039] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69cae0) with pdu=0x200016efe2e8 00:20:53.171 [2024-11-26 20:48:48.105076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:21683 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.171 [2024-11-26 20:48:48.105102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:20:53.171 [2024-11-26 20:48:48.116638] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69cae0) with pdu=0x200016efeb58 00:20:53.171 [2024-11-26 20:48:48.117665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:191 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.171 [2024-11-26 20:48:48.117692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:20:53.171 [2024-11-26 20:48:48.134456] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69cae0) with pdu=0x200016efef90 00:20:53.171 [2024-11-26 20:48:48.136488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9805 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.171 [2024-11-26 20:48:48.136514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:20:53.171 [2024-11-26 20:48:48.147050] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69cae0) with pdu=0x200016efeb58 00:20:53.171 [2024-11-26 20:48:48.149058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18302 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.171 [2024-11-26 20:48:48.149083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:20:53.171 [2024-11-26 20:48:48.159683] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69cae0) with pdu=0x200016efe2e8 
00:20:53.430 [2024-11-26 20:48:48.161644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:9200 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.430 [2024-11-26 20:48:48.161668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:20:53.430 [2024-11-26 20:48:48.172439] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69cae0) with pdu=0x200016efda78 00:20:53.430 [2024-11-26 20:48:48.174380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:9411 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.430 [2024-11-26 20:48:48.174405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:20:53.430 [2024-11-26 20:48:48.185082] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69cae0) with pdu=0x200016efd208 00:20:53.430 [2024-11-26 20:48:48.187015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:10247 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.430 [2024-11-26 20:48:48.187040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:20:53.430 [2024-11-26 20:48:48.197638] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69cae0) with pdu=0x200016efc998 00:20:53.430 [2024-11-26 20:48:48.199584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:20976 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.430 [2024-11-26 20:48:48.199610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:20:53.430 [2024-11-26 20:48:48.210248] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69cae0) with pdu=0x200016efc128 00:20:53.430 [2024-11-26 20:48:48.212149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:19774 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.430 [2024-11-26 20:48:48.212182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:20:53.430 [2024-11-26 20:48:48.222866] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69cae0) with pdu=0x200016efb8b8 00:20:53.430 [2024-11-26 20:48:48.224767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:25395 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.430 [2024-11-26 20:48:48.224790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:20:53.430 [2024-11-26 20:48:48.235480] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69cae0) with pdu=0x200016efb048 00:20:53.430 [2024-11-26 20:48:48.237347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:19048 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.430 [2024-11-26 20:48:48.237371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:53.430 [2024-11-26 20:48:48.248100] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69cae0) with 
pdu=0x200016efa7d8 00:20:53.431 [2024-11-26 20:48:48.249959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:3841 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.431 [2024-11-26 20:48:48.249983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:20:53.431 [2024-11-26 20:48:48.260657] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69cae0) with pdu=0x200016ef9f68 00:20:53.431 [2024-11-26 20:48:48.262497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:374 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.431 [2024-11-26 20:48:48.262520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:20:53.431 [2024-11-26 20:48:48.273304] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69cae0) with pdu=0x200016ef96f8 00:20:53.431 [2024-11-26 20:48:48.275111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:128 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.431 [2024-11-26 20:48:48.275136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:20:53.431 [2024-11-26 20:48:48.285907] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69cae0) with pdu=0x200016ef8e88 00:20:53.431 [2024-11-26 20:48:48.287734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:12641 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.431 [2024-11-26 20:48:48.287762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:20:53.431 [2024-11-26 20:48:48.298496] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69cae0) with pdu=0x200016ef8618 00:20:53.431 [2024-11-26 20:48:48.300299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:12540 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.431 [2024-11-26 20:48:48.300327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:20:53.431 [2024-11-26 20:48:48.311114] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69cae0) with pdu=0x200016ef7da8 00:20:53.431 [2024-11-26 20:48:48.312925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:23327 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.431 [2024-11-26 20:48:48.312953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:20:53.431 [2024-11-26 20:48:48.323818] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69cae0) with pdu=0x200016ef7538 00:20:53.431 [2024-11-26 20:48:48.325583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:2128 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.431 [2024-11-26 20:48:48.325619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:20:53.431 [2024-11-26 20:48:48.336628] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x69cae0) with pdu=0x200016ef6cc8 00:20:53.431 [2024-11-26 20:48:48.338438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:18831 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.431 [2024-11-26 20:48:48.338463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:20:53.431 [2024-11-26 20:48:48.349506] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69cae0) with pdu=0x200016ef6458 00:20:53.431 [2024-11-26 20:48:48.351241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:9309 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.431 [2024-11-26 20:48:48.351268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:20:53.431 [2024-11-26 20:48:48.362034] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69cae0) with pdu=0x200016ef5be8 00:20:53.431 [2024-11-26 20:48:48.363770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:17520 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.431 [2024-11-26 20:48:48.363796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:20:53.431 [2024-11-26 20:48:48.374619] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69cae0) with pdu=0x200016ef5378 00:20:53.431 [2024-11-26 20:48:48.376339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:9077 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.431 [2024-11-26 20:48:48.376366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:20:53.431 19862.00 IOPS, 77.59 MiB/s [2024-11-26T20:48:48.424Z] [2024-11-26 20:48:48.386664] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69cae0) with pdu=0x200016efeb58 00:20:53.431 [2024-11-26 20:48:48.386935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:9797 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.431 [2024-11-26 20:48:48.386956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:20:53.431 00:20:53.431 Latency(us) 00:20:53.431 [2024-11-26T20:48:48.424Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:53.431 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:20:53.431 nvme0n1 : 2.01 19875.63 77.64 0.00 0.00 6429.36 2402.99 24092.28 00:20:53.431 [2024-11-26T20:48:48.424Z] =================================================================================================================== 00:20:53.431 [2024-11-26T20:48:48.424Z] Total : 19875.63 77.64 0.00 0.00 6429.36 2402.99 24092.28 00:20:53.431 { 00:20:53.431 "results": [ 00:20:53.431 { 00:20:53.431 "job": "nvme0n1", 00:20:53.431 "core_mask": "0x2", 00:20:53.431 "workload": "randwrite", 00:20:53.431 "status": "finished", 00:20:53.431 "queue_depth": 128, 00:20:53.431 "io_size": 4096, 00:20:53.431 "runtime": 2.006125, 00:20:53.431 "iops": 19875.630880428685, 00:20:53.431 "mibps": 77.63918312667455, 00:20:53.431 "io_failed": 0, 00:20:53.431 "io_timeout": 0, 00:20:53.431 "avg_latency_us": 6429.36181347206, 
00:20:53.431 "min_latency_us": 2402.9866666666667, 00:20:53.431 "max_latency_us": 24092.281904761905 00:20:53.431 } 00:20:53.431 ], 00:20:53.431 "core_count": 1 00:20:53.431 } 00:20:53.431 20:48:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:20:53.431 20:48:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:20:53.431 20:48:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:20:53.431 20:48:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:20:53.431 | .driver_specific 00:20:53.431 | .nvme_error 00:20:53.431 | .status_code 00:20:53.431 | .command_transient_transport_error' 00:20:53.690 20:48:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 156 > 0 )) 00:20:53.690 20:48:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80975 00:20:53.690 20:48:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 80975 ']' 00:20:53.690 20:48:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 80975 00:20:53.690 20:48:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:20:53.690 20:48:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:53.690 20:48:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80975 00:20:53.690 20:48:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:53.690 killing process with pid 80975 00:20:53.690 20:48:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:53.690 20:48:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80975' 00:20:53.690 Received shutdown signal, test time was about 2.000000 seconds 00:20:53.690 00:20:53.690 Latency(us) 00:20:53.690 [2024-11-26T20:48:48.683Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:53.690 [2024-11-26T20:48:48.683Z] =================================================================================================================== 00:20:53.690 [2024-11-26T20:48:48.683Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:53.690 20:48:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 80975 00:20:53.690 20:48:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 80975 00:20:53.983 20:48:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:20:53.983 20:48:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:20:53.983 20:48:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:20:53.983 20:48:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:20:53.983 20:48:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:20:53.983 20:48:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@58 -- # bperfpid=81026 00:20:53.983 20:48:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 81026 /var/tmp/bperf.sock 00:20:53.983 20:48:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:20:53.983 20:48:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 81026 ']' 00:20:53.983 20:48:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:20:53.983 20:48:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:53.983 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:20:53.983 20:48:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:20:53.983 20:48:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:53.983 20:48:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:53.983 I/O size of 131072 is greater than zero copy threshold (65536). 00:20:53.983 Zero copy mechanism will not be used. 00:20:53.983 [2024-11-26 20:48:48.887648] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:20:53.984 [2024-11-26 20:48:48.887751] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81026 ] 00:20:54.242 [2024-11-26 20:48:49.035788] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:54.242 [2024-11-26 20:48:49.084736] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:54.242 [2024-11-26 20:48:49.126664] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:54.242 20:48:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:54.242 20:48:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:20:54.242 20:48:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:20:54.242 20:48:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:20:54.500 20:48:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:20:54.500 20:48:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.500 20:48:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:54.758 20:48:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.758 20:48:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f 
ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:54.758 20:48:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:55.017 nvme0n1 00:20:55.018 20:48:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:20:55.018 20:48:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.018 20:48:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:55.018 20:48:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.018 20:48:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:20:55.018 20:48:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:20:55.018 I/O size of 131072 is greater than zero copy threshold (65536). 00:20:55.018 Zero copy mechanism will not be used. 00:20:55.018 Running I/O for 2 seconds... 00:20:55.018 [2024-11-26 20:48:49.886103] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.018 [2024-11-26 20:48:49.886289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.018 [2024-11-26 20:48:49.886319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:55.018 [2024-11-26 20:48:49.890552] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.018 [2024-11-26 20:48:49.890707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.018 [2024-11-26 20:48:49.890729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:55.018 [2024-11-26 20:48:49.894409] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.018 [2024-11-26 20:48:49.894552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.018 [2024-11-26 20:48:49.894580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:55.018 [2024-11-26 20:48:49.898234] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.018 [2024-11-26 20:48:49.898390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.018 [2024-11-26 20:48:49.898416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:55.018 [2024-11-26 20:48:49.902057] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.018 
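The next error run (randwrite, 131072-byte I/O, queue depth 16) is configured through the RPC sequence traced above. Condensed as a sketch (assuming, as in this job, that bdevperf listens on /var/tmp/bperf.sock and that plain rpc_cmd reaches the NVMe-oF target on its default RPC socket), it enables NVMe error statistics, attaches the TCP controller with data digest enabled, and then switches crc32c error injection to corrupt so the workload produces the digest errors counted afterwards:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# bdevperf side: keep per-controller NVMe error counters and retry failed I/O indefinitely
"$rpc" -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
# target side: start with crc32c error injection disabled
"$rpc" accel_error_inject_error -o crc32c -t disable
# bdevperf side: attach the controller over TCP with data digest (--ddgst) enabled
"$rpc" -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
# target side: corrupt crc32c results (arguments as captured in the trace)
"$rpc" accel_error_inject_error -o crc32c -t corrupt -i 32
# drive the timed randwrite workload from bdevperf
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests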
[2024-11-26 20:48:49.902204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.018 [2024-11-26 20:48:49.902225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:55.018 [2024-11-26 20:48:49.905858] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.018 [2024-11-26 20:48:49.906047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.018 [2024-11-26 20:48:49.906066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:55.018 [2024-11-26 20:48:49.909582] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.018 [2024-11-26 20:48:49.909723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.018 [2024-11-26 20:48:49.909744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:55.018 [2024-11-26 20:48:49.913335] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.018 [2024-11-26 20:48:49.913478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.018 [2024-11-26 20:48:49.913497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:55.018 [2024-11-26 20:48:49.916782] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.018 [2024-11-26 20:48:49.916943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.018 [2024-11-26 20:48:49.916963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:55.018 [2024-11-26 20:48:49.920266] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.018 [2024-11-26 20:48:49.920317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.018 [2024-11-26 20:48:49.920338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:55.018 [2024-11-26 20:48:49.923837] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.018 [2024-11-26 20:48:49.923890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.018 [2024-11-26 20:48:49.923911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:55.018 [2024-11-26 20:48:49.927476] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with 
pdu=0x200016eff3c8 00:20:55.018 [2024-11-26 20:48:49.927529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.018 [2024-11-26 20:48:49.927550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:55.018 [2024-11-26 20:48:49.931046] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.018 [2024-11-26 20:48:49.931097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.018 [2024-11-26 20:48:49.931117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:55.018 [2024-11-26 20:48:49.934626] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.018 [2024-11-26 20:48:49.934677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.018 [2024-11-26 20:48:49.934697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:55.018 [2024-11-26 20:48:49.938223] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.018 [2024-11-26 20:48:49.938290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.018 [2024-11-26 20:48:49.938323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:55.018 [2024-11-26 20:48:49.941815] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.018 [2024-11-26 20:48:49.941898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.018 [2024-11-26 20:48:49.941919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:55.018 [2024-11-26 20:48:49.945436] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.018 [2024-11-26 20:48:49.945611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.018 [2024-11-26 20:48:49.945633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:55.018 [2024-11-26 20:48:49.949252] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.018 [2024-11-26 20:48:49.949398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.018 [2024-11-26 20:48:49.949419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:55.018 [2024-11-26 20:48:49.952695] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.018 [2024-11-26 20:48:49.952869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.018 [2024-11-26 20:48:49.952891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:55.018 [2024-11-26 20:48:49.956211] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.018 [2024-11-26 20:48:49.956264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.019 [2024-11-26 20:48:49.956284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:55.019 [2024-11-26 20:48:49.959978] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.019 [2024-11-26 20:48:49.960029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.019 [2024-11-26 20:48:49.960050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:55.019 [2024-11-26 20:48:49.963707] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.019 [2024-11-26 20:48:49.963759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.019 [2024-11-26 20:48:49.963779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:55.019 [2024-11-26 20:48:49.967473] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.019 [2024-11-26 20:48:49.967527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.019 [2024-11-26 20:48:49.967547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:55.019 [2024-11-26 20:48:49.971261] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.019 [2024-11-26 20:48:49.971344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.019 [2024-11-26 20:48:49.971364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:55.019 [2024-11-26 20:48:49.974908] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.019 [2024-11-26 20:48:49.974960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.019 [2024-11-26 20:48:49.974980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:55.019 [2024-11-26 20:48:49.978577] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.019 [2024-11-26 20:48:49.978629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.019 [2024-11-26 20:48:49.978649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:55.019 [2024-11-26 20:48:49.982255] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.019 [2024-11-26 20:48:49.982310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.019 [2024-11-26 20:48:49.982331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:55.019 [2024-11-26 20:48:49.985978] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.019 [2024-11-26 20:48:49.986119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.019 [2024-11-26 20:48:49.986140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:55.019 [2024-11-26 20:48:49.989676] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.019 [2024-11-26 20:48:49.989752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.019 [2024-11-26 20:48:49.989772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:55.019 [2024-11-26 20:48:49.993347] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.019 [2024-11-26 20:48:49.993409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.019 [2024-11-26 20:48:49.993430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:55.019 [2024-11-26 20:48:49.997074] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.019 [2024-11-26 20:48:49.997200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.019 [2024-11-26 20:48:49.997221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:55.019 [2024-11-26 20:48:50.000875] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.019 [2024-11-26 20:48:50.000962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.019 [2024-11-26 20:48:50.000982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:55.019 [2024-11-26 20:48:50.005159] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.019 [2024-11-26 20:48:50.005301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.019 [2024-11-26 20:48:50.005323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:55.280 [2024-11-26 20:48:50.009250] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.280 [2024-11-26 20:48:50.009418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.280 [2024-11-26 20:48:50.009442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:55.280 [2024-11-26 20:48:50.013381] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.280 [2024-11-26 20:48:50.013556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.280 [2024-11-26 20:48:50.013578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:55.280 [2024-11-26 20:48:50.017475] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.280 [2024-11-26 20:48:50.017626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.280 [2024-11-26 20:48:50.017645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:55.280 [2024-11-26 20:48:50.021595] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.280 [2024-11-26 20:48:50.021754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.280 [2024-11-26 20:48:50.021776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:55.280 [2024-11-26 20:48:50.025659] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.280 [2024-11-26 20:48:50.025825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.280 [2024-11-26 20:48:50.025847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:55.280 [2024-11-26 20:48:50.029666] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.280 [2024-11-26 20:48:50.029817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.280 [2024-11-26 20:48:50.029837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:55.280 
[2024-11-26 20:48:50.033535] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.280 [2024-11-26 20:48:50.033662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.280 [2024-11-26 20:48:50.033682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:55.280 [2024-11-26 20:48:50.037417] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.280 [2024-11-26 20:48:50.037556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.280 [2024-11-26 20:48:50.037580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:55.280 [2024-11-26 20:48:50.041217] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.280 [2024-11-26 20:48:50.041375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.280 [2024-11-26 20:48:50.041398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:55.280 [2024-11-26 20:48:50.044964] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.280 [2024-11-26 20:48:50.045111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.280 [2024-11-26 20:48:50.045131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:55.280 [2024-11-26 20:48:50.048975] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.280 [2024-11-26 20:48:50.049140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.280 [2024-11-26 20:48:50.049159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:55.280 [2024-11-26 20:48:50.052973] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.280 [2024-11-26 20:48:50.053131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.280 [2024-11-26 20:48:50.053152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:55.280 [2024-11-26 20:48:50.056895] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.280 [2024-11-26 20:48:50.057053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.280 [2024-11-26 20:48:50.057074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 
dnr:0 00:20:55.280 [2024-11-26 20:48:50.060926] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.280 [2024-11-26 20:48:50.061094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.280 [2024-11-26 20:48:50.061117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:55.280 [2024-11-26 20:48:50.064805] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.280 [2024-11-26 20:48:50.064967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.280 [2024-11-26 20:48:50.064987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:55.280 [2024-11-26 20:48:50.068709] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.280 [2024-11-26 20:48:50.068867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.280 [2024-11-26 20:48:50.068887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:55.280 [2024-11-26 20:48:50.072648] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.280 [2024-11-26 20:48:50.072800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.280 [2024-11-26 20:48:50.072821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:55.280 [2024-11-26 20:48:50.076502] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.280 [2024-11-26 20:48:50.076639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.281 [2024-11-26 20:48:50.076659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:55.281 [2024-11-26 20:48:50.080296] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.281 [2024-11-26 20:48:50.080454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.281 [2024-11-26 20:48:50.080474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:55.281 [2024-11-26 20:48:50.083680] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.281 [2024-11-26 20:48:50.083859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.281 [2024-11-26 20:48:50.083879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 
cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:55.281 [2024-11-26 20:48:50.087180] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.281 [2024-11-26 20:48:50.087229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.281 [2024-11-26 20:48:50.087249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:55.281 [2024-11-26 20:48:50.090828] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.281 [2024-11-26 20:48:50.090881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.281 [2024-11-26 20:48:50.090901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:55.281 [2024-11-26 20:48:50.094537] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.281 [2024-11-26 20:48:50.094597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.281 [2024-11-26 20:48:50.094617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:55.281 [2024-11-26 20:48:50.098218] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.281 [2024-11-26 20:48:50.098289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.281 [2024-11-26 20:48:50.098309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:55.281 [2024-11-26 20:48:50.101952] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.281 [2024-11-26 20:48:50.102027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.281 [2024-11-26 20:48:50.102047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:55.281 [2024-11-26 20:48:50.105819] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.281 [2024-11-26 20:48:50.105871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.281 [2024-11-26 20:48:50.105907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:55.281 [2024-11-26 20:48:50.109569] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.281 [2024-11-26 20:48:50.109622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.281 [2024-11-26 20:48:50.109642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:55.281 [2024-11-26 20:48:50.113339] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.281 [2024-11-26 20:48:50.113392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.281 [2024-11-26 20:48:50.113412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:55.281 [2024-11-26 20:48:50.117044] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.281 [2024-11-26 20:48:50.117095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.281 [2024-11-26 20:48:50.117115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:55.281 [2024-11-26 20:48:50.120823] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.281 [2024-11-26 20:48:50.120879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.281 [2024-11-26 20:48:50.120899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:55.281 [2024-11-26 20:48:50.124579] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.281 [2024-11-26 20:48:50.124649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.281 [2024-11-26 20:48:50.124668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:55.281 [2024-11-26 20:48:50.128405] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.281 [2024-11-26 20:48:50.128486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.281 [2024-11-26 20:48:50.128506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:55.281 [2024-11-26 20:48:50.131991] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.281 [2024-11-26 20:48:50.132042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.281 [2024-11-26 20:48:50.132062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:55.281 [2024-11-26 20:48:50.135578] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.281 [2024-11-26 20:48:50.135649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.281 [2024-11-26 20:48:50.135670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:55.281 [2024-11-26 20:48:50.139158] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.281 [2024-11-26 20:48:50.139260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.281 [2024-11-26 20:48:50.139280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:55.281 [2024-11-26 20:48:50.142779] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.281 [2024-11-26 20:48:50.142967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.281 [2024-11-26 20:48:50.142987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:55.281 [2024-11-26 20:48:50.146541] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.281 [2024-11-26 20:48:50.146713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.281 [2024-11-26 20:48:50.146732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:55.281 [2024-11-26 20:48:50.150294] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.281 [2024-11-26 20:48:50.150445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.281 [2024-11-26 20:48:50.150465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:55.282 [2024-11-26 20:48:50.153958] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.282 [2024-11-26 20:48:50.154114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.282 [2024-11-26 20:48:50.154132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:55.282 [2024-11-26 20:48:50.157766] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.282 [2024-11-26 20:48:50.157928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.282 [2024-11-26 20:48:50.157947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:55.282 [2024-11-26 20:48:50.161180] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.282 [2024-11-26 20:48:50.161367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.282 [2024-11-26 20:48:50.161387] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:55.282 [2024-11-26 20:48:50.164640] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.282 [2024-11-26 20:48:50.164718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.282 [2024-11-26 20:48:50.164738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:55.282 [2024-11-26 20:48:50.168326] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.282 [2024-11-26 20:48:50.168405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.282 [2024-11-26 20:48:50.168426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:55.282 [2024-11-26 20:48:50.172019] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.282 [2024-11-26 20:48:50.172069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.282 [2024-11-26 20:48:50.172089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:55.282 [2024-11-26 20:48:50.175624] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.282 [2024-11-26 20:48:50.175673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.282 [2024-11-26 20:48:50.175694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:55.282 [2024-11-26 20:48:50.179348] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.282 [2024-11-26 20:48:50.179448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.282 [2024-11-26 20:48:50.179470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:55.282 [2024-11-26 20:48:50.183004] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.282 [2024-11-26 20:48:50.183106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.282 [2024-11-26 20:48:50.183126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:55.282 [2024-11-26 20:48:50.186771] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.282 [2024-11-26 20:48:50.186837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.282 [2024-11-26 
20:48:50.186857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:55.282 [2024-11-26 20:48:50.190519] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.282 [2024-11-26 20:48:50.190719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.282 [2024-11-26 20:48:50.190739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:55.282 [2024-11-26 20:48:50.194387] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.282 [2024-11-26 20:48:50.194544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.282 [2024-11-26 20:48:50.194564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:55.282 [2024-11-26 20:48:50.198272] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.282 [2024-11-26 20:48:50.198452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.282 [2024-11-26 20:48:50.198472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:55.282 [2024-11-26 20:48:50.202131] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.282 [2024-11-26 20:48:50.202272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.282 [2024-11-26 20:48:50.202292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:55.282 [2024-11-26 20:48:50.205907] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.282 [2024-11-26 20:48:50.206069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.282 [2024-11-26 20:48:50.206088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:55.282 [2024-11-26 20:48:50.209729] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.282 [2024-11-26 20:48:50.209871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.282 [2024-11-26 20:48:50.209892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:55.282 [2024-11-26 20:48:50.213563] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.282 [2024-11-26 20:48:50.213749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:55.282 [2024-11-26 20:48:50.213768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:55.282 [2024-11-26 20:48:50.217314] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.282 [2024-11-26 20:48:50.217467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.282 [2024-11-26 20:48:50.217486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:55.282 [2024-11-26 20:48:50.221099] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.282 [2024-11-26 20:48:50.221221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.282 [2024-11-26 20:48:50.221242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:55.282 [2024-11-26 20:48:50.224863] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.282 [2024-11-26 20:48:50.224997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.282 [2024-11-26 20:48:50.225020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:55.282 [2024-11-26 20:48:50.228692] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.282 [2024-11-26 20:48:50.228841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.282 [2024-11-26 20:48:50.228861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:55.282 [2024-11-26 20:48:50.232492] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.283 [2024-11-26 20:48:50.232645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.283 [2024-11-26 20:48:50.232665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:55.283 [2024-11-26 20:48:50.236334] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.283 [2024-11-26 20:48:50.236458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.283 [2024-11-26 20:48:50.236478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:55.283 [2024-11-26 20:48:50.240133] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.283 [2024-11-26 20:48:50.240293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:20:55.283 [2024-11-26 20:48:50.240312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:55.283 [2024-11-26 20:48:50.243905] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.283 [2024-11-26 20:48:50.244050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.283 [2024-11-26 20:48:50.244070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:55.283 [2024-11-26 20:48:50.247727] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.283 [2024-11-26 20:48:50.247884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.283 [2024-11-26 20:48:50.247904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:55.283 [2024-11-26 20:48:50.251511] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.283 [2024-11-26 20:48:50.251667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.283 [2024-11-26 20:48:50.251687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:55.283 [2024-11-26 20:48:50.255288] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.283 [2024-11-26 20:48:50.255453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.283 [2024-11-26 20:48:50.255472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:55.283 [2024-11-26 20:48:50.259075] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.283 [2024-11-26 20:48:50.259258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.283 [2024-11-26 20:48:50.259278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:55.283 [2024-11-26 20:48:50.262521] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.283 [2024-11-26 20:48:50.262716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.283 [2024-11-26 20:48:50.262735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:55.283 [2024-11-26 20:48:50.266042] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.283 [2024-11-26 20:48:50.266096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10304 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.283 [2024-11-26 20:48:50.266118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:55.543 [2024-11-26 20:48:50.269906] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.543 [2024-11-26 20:48:50.269957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.543 [2024-11-26 20:48:50.269978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:55.543 [2024-11-26 20:48:50.273698] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.543 [2024-11-26 20:48:50.273749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.543 [2024-11-26 20:48:50.273771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:55.543 [2024-11-26 20:48:50.277457] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.543 [2024-11-26 20:48:50.277525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.543 [2024-11-26 20:48:50.277546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:55.543 [2024-11-26 20:48:50.281178] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.543 [2024-11-26 20:48:50.281231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.543 [2024-11-26 20:48:50.281251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:55.543 [2024-11-26 20:48:50.284793] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.543 [2024-11-26 20:48:50.284866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.543 [2024-11-26 20:48:50.284886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:55.543 [2024-11-26 20:48:50.288386] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.543 [2024-11-26 20:48:50.288459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.543 [2024-11-26 20:48:50.288479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:55.543 [2024-11-26 20:48:50.292009] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.543 [2024-11-26 20:48:50.292190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:2 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.543 [2024-11-26 20:48:50.292211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:55.543 [2024-11-26 20:48:50.295772] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.543 [2024-11-26 20:48:50.295928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.543 [2024-11-26 20:48:50.295947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:55.543 [2024-11-26 20:48:50.299589] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.543 [2024-11-26 20:48:50.299765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.543 [2024-11-26 20:48:50.299787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:55.543 [2024-11-26 20:48:50.303403] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.543 [2024-11-26 20:48:50.303530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.543 [2024-11-26 20:48:50.303550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:55.543 [2024-11-26 20:48:50.307146] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.543 [2024-11-26 20:48:50.307311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.543 [2024-11-26 20:48:50.307332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:55.543 [2024-11-26 20:48:50.310901] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.544 [2024-11-26 20:48:50.311068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.544 [2024-11-26 20:48:50.311087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:55.544 [2024-11-26 20:48:50.314751] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.544 [2024-11-26 20:48:50.314935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.544 [2024-11-26 20:48:50.314955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:55.544 [2024-11-26 20:48:50.318502] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.544 [2024-11-26 20:48:50.318644] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.544 [2024-11-26 20:48:50.318664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:55.544 [2024-11-26 20:48:50.322242] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.544 [2024-11-26 20:48:50.322383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.544 [2024-11-26 20:48:50.322403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:55.544 [2024-11-26 20:48:50.326013] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.544 [2024-11-26 20:48:50.326138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.544 [2024-11-26 20:48:50.326158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:55.544 [2024-11-26 20:48:50.329867] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.544 [2024-11-26 20:48:50.330033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.544 [2024-11-26 20:48:50.330053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:55.544 [2024-11-26 20:48:50.333643] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.544 [2024-11-26 20:48:50.333811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.544 [2024-11-26 20:48:50.333830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:55.544 [2024-11-26 20:48:50.337459] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.544 [2024-11-26 20:48:50.337631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.544 [2024-11-26 20:48:50.337650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:55.544 [2024-11-26 20:48:50.341322] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.544 [2024-11-26 20:48:50.341477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.544 [2024-11-26 20:48:50.341497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:55.544 [2024-11-26 20:48:50.345234] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.544 [2024-11-26 20:48:50.345390] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.544 [2024-11-26 20:48:50.345410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:55.544 [2024-11-26 20:48:50.348991] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.544 [2024-11-26 20:48:50.349157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.544 [2024-11-26 20:48:50.349178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:55.544 [2024-11-26 20:48:50.352642] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.544 [2024-11-26 20:48:50.352843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.544 [2024-11-26 20:48:50.352864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:55.544 [2024-11-26 20:48:50.356476] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.544 [2024-11-26 20:48:50.356531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.544 [2024-11-26 20:48:50.356552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:55.544 [2024-11-26 20:48:50.360052] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.544 [2024-11-26 20:48:50.360111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.544 [2024-11-26 20:48:50.360132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:55.544 [2024-11-26 20:48:50.363652] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.544 [2024-11-26 20:48:50.363703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.544 [2024-11-26 20:48:50.363723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:55.544 [2024-11-26 20:48:50.367260] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.544 [2024-11-26 20:48:50.367335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.544 [2024-11-26 20:48:50.367375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:55.544 [2024-11-26 20:48:50.370905] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.544 [2024-11-26 
20:48:50.370962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.544 [2024-11-26 20:48:50.370982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:55.544 [2024-11-26 20:48:50.374469] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.544 [2024-11-26 20:48:50.374562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.544 [2024-11-26 20:48:50.374582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:55.544 [2024-11-26 20:48:50.378075] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.544 [2024-11-26 20:48:50.378159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.544 [2024-11-26 20:48:50.378192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:55.544 [2024-11-26 20:48:50.381915] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.544 [2024-11-26 20:48:50.382076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.544 [2024-11-26 20:48:50.382097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:55.544 [2024-11-26 20:48:50.385450] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.544 [2024-11-26 20:48:50.385613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.544 [2024-11-26 20:48:50.385634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:55.544 [2024-11-26 20:48:50.388947] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.544 [2024-11-26 20:48:50.389016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.544 [2024-11-26 20:48:50.389037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:55.544 [2024-11-26 20:48:50.392555] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.544 [2024-11-26 20:48:50.392636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.544 [2024-11-26 20:48:50.392656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:55.544 [2024-11-26 20:48:50.396301] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 
00:20:55.544 [2024-11-26 20:48:50.396354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.544 [2024-11-26 20:48:50.396373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:55.544 [2024-11-26 20:48:50.400016] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.544 [2024-11-26 20:48:50.400068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.544 [2024-11-26 20:48:50.400087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:55.544 [2024-11-26 20:48:50.403656] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.544 [2024-11-26 20:48:50.403715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.544 [2024-11-26 20:48:50.403735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:55.544 [2024-11-26 20:48:50.407240] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.544 [2024-11-26 20:48:50.407312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.545 [2024-11-26 20:48:50.407332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:55.545 [2024-11-26 20:48:50.410952] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.545 [2024-11-26 20:48:50.411033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.545 [2024-11-26 20:48:50.411054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:55.545 [2024-11-26 20:48:50.414551] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.545 [2024-11-26 20:48:50.414633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.545 [2024-11-26 20:48:50.414653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:55.545 [2024-11-26 20:48:50.418190] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.545 [2024-11-26 20:48:50.418381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.545 [2024-11-26 20:48:50.418401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:55.545 [2024-11-26 20:48:50.421963] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with 
pdu=0x200016eff3c8 00:20:55.545 [2024-11-26 20:48:50.422115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.545 [2024-11-26 20:48:50.422135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:55.545 [2024-11-26 20:48:50.425761] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.545 [2024-11-26 20:48:50.425909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.545 [2024-11-26 20:48:50.425929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:55.545 [2024-11-26 20:48:50.429550] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.545 [2024-11-26 20:48:50.429707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.545 [2024-11-26 20:48:50.429727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:55.545 [2024-11-26 20:48:50.433386] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.545 [2024-11-26 20:48:50.433513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.545 [2024-11-26 20:48:50.433534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:55.545 [2024-11-26 20:48:50.437133] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.545 [2024-11-26 20:48:50.437338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.545 [2024-11-26 20:48:50.437356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:55.545 [2024-11-26 20:48:50.440911] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.545 [2024-11-26 20:48:50.441082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.545 [2024-11-26 20:48:50.441101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:55.545 [2024-11-26 20:48:50.444348] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.545 [2024-11-26 20:48:50.444509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.545 [2024-11-26 20:48:50.444529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:55.545 [2024-11-26 20:48:50.447829] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.545 [2024-11-26 20:48:50.447879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.545 [2024-11-26 20:48:50.447898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:55.545 [2024-11-26 20:48:50.451311] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.545 [2024-11-26 20:48:50.451383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.545 [2024-11-26 20:48:50.451403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:55.545 [2024-11-26 20:48:50.454858] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.545 [2024-11-26 20:48:50.454909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.545 [2024-11-26 20:48:50.454929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:55.545 [2024-11-26 20:48:50.458429] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.545 [2024-11-26 20:48:50.458492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.545 [2024-11-26 20:48:50.458511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:55.545 [2024-11-26 20:48:50.462035] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.545 [2024-11-26 20:48:50.462137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.545 [2024-11-26 20:48:50.462157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:55.545 [2024-11-26 20:48:50.465635] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.545 [2024-11-26 20:48:50.465718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.545 [2024-11-26 20:48:50.465738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:55.545 [2024-11-26 20:48:50.469267] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.545 [2024-11-26 20:48:50.469345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.545 [2024-11-26 20:48:50.469365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:55.545 [2024-11-26 20:48:50.472875] tcp.c:2233:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.545 [2024-11-26 20:48:50.473075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.545 [2024-11-26 20:48:50.473096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:55.545 [2024-11-26 20:48:50.476677] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.545 [2024-11-26 20:48:50.476845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.545 [2024-11-26 20:48:50.476865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:55.545 [2024-11-26 20:48:50.480470] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.545 [2024-11-26 20:48:50.480609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.545 [2024-11-26 20:48:50.480629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:55.545 [2024-11-26 20:48:50.484303] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.545 [2024-11-26 20:48:50.484459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.545 [2024-11-26 20:48:50.484479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:55.545 [2024-11-26 20:48:50.487816] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.545 [2024-11-26 20:48:50.488020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.545 [2024-11-26 20:48:50.488040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:55.545 [2024-11-26 20:48:50.491440] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.545 [2024-11-26 20:48:50.491496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.545 [2024-11-26 20:48:50.491517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:55.545 [2024-11-26 20:48:50.495214] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.545 [2024-11-26 20:48:50.495277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.545 [2024-11-26 20:48:50.495306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:55.545 [2024-11-26 20:48:50.498921] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.545 [2024-11-26 20:48:50.498974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.545 [2024-11-26 20:48:50.498994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:55.545 [2024-11-26 20:48:50.502616] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.545 [2024-11-26 20:48:50.502683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.545 [2024-11-26 20:48:50.502703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:55.545 [2024-11-26 20:48:50.506225] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.545 [2024-11-26 20:48:50.506278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.545 [2024-11-26 20:48:50.506298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:55.546 [2024-11-26 20:48:50.509791] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.546 [2024-11-26 20:48:50.509856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.546 [2024-11-26 20:48:50.509876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:55.546 [2024-11-26 20:48:50.513399] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.546 [2024-11-26 20:48:50.513481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.546 [2024-11-26 20:48:50.513501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:55.546 [2024-11-26 20:48:50.517011] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.546 [2024-11-26 20:48:50.517216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.546 [2024-11-26 20:48:50.517236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:55.546 [2024-11-26 20:48:50.520785] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.546 [2024-11-26 20:48:50.520937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.546 [2024-11-26 20:48:50.520958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:55.546 
[2024-11-26 20:48:50.524583] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.546 [2024-11-26 20:48:50.524706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.546 [2024-11-26 20:48:50.524726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:55.546 [2024-11-26 20:48:50.528399] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.546 [2024-11-26 20:48:50.528554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.546 [2024-11-26 20:48:50.528574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:55.546 [2024-11-26 20:48:50.532151] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.546 [2024-11-26 20:48:50.532316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.546 [2024-11-26 20:48:50.532336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:55.806 [2024-11-26 20:48:50.535940] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.806 [2024-11-26 20:48:50.536094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.806 [2024-11-26 20:48:50.536113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:55.806 [2024-11-26 20:48:50.539458] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.806 [2024-11-26 20:48:50.539630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.806 [2024-11-26 20:48:50.539651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:55.806 [2024-11-26 20:48:50.542984] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.806 [2024-11-26 20:48:50.543032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.806 [2024-11-26 20:48:50.543052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:55.806 [2024-11-26 20:48:50.546540] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.806 [2024-11-26 20:48:50.546594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.806 [2024-11-26 20:48:50.546614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 
m:0 dnr:0 00:20:55.806 [2024-11-26 20:48:50.550128] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.806 [2024-11-26 20:48:50.550191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.806 [2024-11-26 20:48:50.550212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:55.806 [2024-11-26 20:48:50.553794] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.806 [2024-11-26 20:48:50.553844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.806 [2024-11-26 20:48:50.553864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:55.806 [2024-11-26 20:48:50.557396] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.806 [2024-11-26 20:48:50.557458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.806 [2024-11-26 20:48:50.557477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:55.806 [2024-11-26 20:48:50.561033] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.806 [2024-11-26 20:48:50.561169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.806 [2024-11-26 20:48:50.561189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:55.806 [2024-11-26 20:48:50.564688] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.806 [2024-11-26 20:48:50.564808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.806 [2024-11-26 20:48:50.564828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:55.806 [2024-11-26 20:48:50.568363] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.806 [2024-11-26 20:48:50.568502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.806 [2024-11-26 20:48:50.568523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:55.807 [2024-11-26 20:48:50.571753] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.807 [2024-11-26 20:48:50.571923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.807 [2024-11-26 20:48:50.571943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 
cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:55.807 [2024-11-26 20:48:50.575177] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.807 [2024-11-26 20:48:50.575225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.807 [2024-11-26 20:48:50.575245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:55.807 [2024-11-26 20:48:50.578884] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.807 [2024-11-26 20:48:50.578937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.807 [2024-11-26 20:48:50.578958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:55.807 [2024-11-26 20:48:50.582594] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.807 [2024-11-26 20:48:50.582642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.807 [2024-11-26 20:48:50.582662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:55.807 [2024-11-26 20:48:50.586260] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.807 [2024-11-26 20:48:50.586312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.807 [2024-11-26 20:48:50.586332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:55.807 [2024-11-26 20:48:50.589875] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.807 [2024-11-26 20:48:50.589957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.807 [2024-11-26 20:48:50.589977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:55.807 [2024-11-26 20:48:50.593533] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.807 [2024-11-26 20:48:50.593624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.807 [2024-11-26 20:48:50.593645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:55.807 [2024-11-26 20:48:50.597242] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.807 [2024-11-26 20:48:50.597396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.807 [2024-11-26 20:48:50.597417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:55.807 [2024-11-26 20:48:50.600938] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.807 [2024-11-26 20:48:50.601079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.807 [2024-11-26 20:48:50.601099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:55.807 [2024-11-26 20:48:50.604348] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.807 [2024-11-26 20:48:50.604507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.807 [2024-11-26 20:48:50.604527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:55.807 [2024-11-26 20:48:50.607822] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.807 [2024-11-26 20:48:50.607876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.807 [2024-11-26 20:48:50.607896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:55.807 [2024-11-26 20:48:50.611426] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.807 [2024-11-26 20:48:50.611485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.807 [2024-11-26 20:48:50.611505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:55.807 [2024-11-26 20:48:50.614987] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.807 [2024-11-26 20:48:50.615035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.807 [2024-11-26 20:48:50.615055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:55.807 [2024-11-26 20:48:50.618572] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.807 [2024-11-26 20:48:50.618625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.807 [2024-11-26 20:48:50.618644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:55.807 [2024-11-26 20:48:50.622193] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.807 [2024-11-26 20:48:50.622248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.807 [2024-11-26 20:48:50.622267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:55.807 [2024-11-26 20:48:50.625753] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.807 [2024-11-26 20:48:50.625829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.807 [2024-11-26 20:48:50.625848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:55.807 [2024-11-26 20:48:50.629389] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.807 [2024-11-26 20:48:50.629445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.807 [2024-11-26 20:48:50.629481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:55.807 [2024-11-26 20:48:50.633027] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.807 [2024-11-26 20:48:50.633213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.807 [2024-11-26 20:48:50.633233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:55.807 [2024-11-26 20:48:50.636842] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.807 [2024-11-26 20:48:50.637002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.807 [2024-11-26 20:48:50.637022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:55.807 [2024-11-26 20:48:50.640664] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.807 [2024-11-26 20:48:50.640807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.807 [2024-11-26 20:48:50.640826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:55.807 [2024-11-26 20:48:50.644471] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.807 [2024-11-26 20:48:50.644615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.807 [2024-11-26 20:48:50.644636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:55.807 [2024-11-26 20:48:50.648241] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.807 [2024-11-26 20:48:50.648398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.807 [2024-11-26 20:48:50.648418] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:55.807 [2024-11-26 20:48:50.652036] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.807 [2024-11-26 20:48:50.652194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.807 [2024-11-26 20:48:50.652214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:55.807 [2024-11-26 20:48:50.655825] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.807 [2024-11-26 20:48:50.655962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.807 [2024-11-26 20:48:50.655982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:55.807 [2024-11-26 20:48:50.659225] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.807 [2024-11-26 20:48:50.659405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.807 [2024-11-26 20:48:50.659425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:55.807 [2024-11-26 20:48:50.662713] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.807 [2024-11-26 20:48:50.662768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.807 [2024-11-26 20:48:50.662788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:55.807 [2024-11-26 20:48:50.666296] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.807 [2024-11-26 20:48:50.666349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.807 [2024-11-26 20:48:50.666368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:55.807 [2024-11-26 20:48:50.669809] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.808 [2024-11-26 20:48:50.669866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.808 [2024-11-26 20:48:50.669885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:55.808 [2024-11-26 20:48:50.673454] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.808 [2024-11-26 20:48:50.673504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.808 [2024-11-26 
20:48:50.673524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:55.808 [2024-11-26 20:48:50.677005] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.808 [2024-11-26 20:48:50.677061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.808 [2024-11-26 20:48:50.677081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:55.808 [2024-11-26 20:48:50.680735] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.808 [2024-11-26 20:48:50.680801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.808 [2024-11-26 20:48:50.680820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:55.808 [2024-11-26 20:48:50.684498] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.808 [2024-11-26 20:48:50.684571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.808 [2024-11-26 20:48:50.684591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:55.808 [2024-11-26 20:48:50.688241] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.808 [2024-11-26 20:48:50.688324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.808 [2024-11-26 20:48:50.688344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:55.808 [2024-11-26 20:48:50.691908] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.808 [2024-11-26 20:48:50.691991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.808 [2024-11-26 20:48:50.692011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:55.808 [2024-11-26 20:48:50.695710] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.808 [2024-11-26 20:48:50.695911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.808 [2024-11-26 20:48:50.695933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:55.808 [2024-11-26 20:48:50.699280] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.808 [2024-11-26 20:48:50.699497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:55.808 [2024-11-26 20:48:50.699518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:55.808 [2024-11-26 20:48:50.702799] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.808 [2024-11-26 20:48:50.702868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.808 [2024-11-26 20:48:50.702888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:55.808 [2024-11-26 20:48:50.706390] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.808 [2024-11-26 20:48:50.706457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.808 [2024-11-26 20:48:50.706477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:55.808 [2024-11-26 20:48:50.709966] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.808 [2024-11-26 20:48:50.710016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.808 [2024-11-26 20:48:50.710036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:55.808 [2024-11-26 20:48:50.713541] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.808 [2024-11-26 20:48:50.713599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.808 [2024-11-26 20:48:50.713618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:55.808 [2024-11-26 20:48:50.717134] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.808 [2024-11-26 20:48:50.717232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.808 [2024-11-26 20:48:50.717251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:55.808 [2024-11-26 20:48:50.720781] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.808 [2024-11-26 20:48:50.720838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.808 [2024-11-26 20:48:50.720858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:55.808 [2024-11-26 20:48:50.724407] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.808 [2024-11-26 20:48:50.724485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:20:55.808 [2024-11-26 20:48:50.724505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:55.808 [2024-11-26 20:48:50.728000] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.808 [2024-11-26 20:48:50.728179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.808 [2024-11-26 20:48:50.728199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:55.808 [2024-11-26 20:48:50.731784] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.808 [2024-11-26 20:48:50.731922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.808 [2024-11-26 20:48:50.731943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:55.808 [2024-11-26 20:48:50.735536] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.808 [2024-11-26 20:48:50.735663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.808 [2024-11-26 20:48:50.735683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:55.808 [2024-11-26 20:48:50.739259] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.808 [2024-11-26 20:48:50.739394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.808 [2024-11-26 20:48:50.739414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:55.808 [2024-11-26 20:48:50.743167] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.808 [2024-11-26 20:48:50.743333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.808 [2024-11-26 20:48:50.743353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:55.808 [2024-11-26 20:48:50.746943] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.808 [2024-11-26 20:48:50.747095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.808 [2024-11-26 20:48:50.747114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:55.808 [2024-11-26 20:48:50.750816] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.808 [2024-11-26 20:48:50.750976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19936 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.808 [2024-11-26 20:48:50.750996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:55.808 [2024-11-26 20:48:50.754404] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.808 [2024-11-26 20:48:50.754568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.808 [2024-11-26 20:48:50.754588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:55.808 [2024-11-26 20:48:50.758037] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.808 [2024-11-26 20:48:50.758090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.808 [2024-11-26 20:48:50.758111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:55.808 [2024-11-26 20:48:50.761821] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.808 [2024-11-26 20:48:50.761883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.808 [2024-11-26 20:48:50.761903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:55.808 [2024-11-26 20:48:50.765587] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.808 [2024-11-26 20:48:50.765638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.808 [2024-11-26 20:48:50.765658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:55.808 [2024-11-26 20:48:50.769255] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.808 [2024-11-26 20:48:50.769313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.809 [2024-11-26 20:48:50.769333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:55.809 [2024-11-26 20:48:50.772981] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.809 [2024-11-26 20:48:50.773034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.809 [2024-11-26 20:48:50.773054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:55.809 [2024-11-26 20:48:50.776847] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.809 [2024-11-26 20:48:50.776915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 
nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.809 [2024-11-26 20:48:50.776935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:55.809 [2024-11-26 20:48:50.780674] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.809 [2024-11-26 20:48:50.780721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.809 [2024-11-26 20:48:50.780741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:55.809 [2024-11-26 20:48:50.784505] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.809 [2024-11-26 20:48:50.784617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.809 [2024-11-26 20:48:50.784637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:55.809 [2024-11-26 20:48:50.788346] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.809 [2024-11-26 20:48:50.788402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.809 [2024-11-26 20:48:50.788423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:55.809 [2024-11-26 20:48:50.792212] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:55.809 [2024-11-26 20:48:50.792273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.809 [2024-11-26 20:48:50.792293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:56.070 [2024-11-26 20:48:50.795980] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.070 [2024-11-26 20:48:50.796038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.070 [2024-11-26 20:48:50.796059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:56.070 [2024-11-26 20:48:50.799793] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.070 [2024-11-26 20:48:50.799853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.070 [2024-11-26 20:48:50.799890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:56.070 [2024-11-26 20:48:50.803708] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.070 [2024-11-26 20:48:50.803759] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.070 [2024-11-26 20:48:50.803780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:56.070 [2024-11-26 20:48:50.807516] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.070 [2024-11-26 20:48:50.807593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.070 [2024-11-26 20:48:50.807613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:56.070 [2024-11-26 20:48:50.811156] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.070 [2024-11-26 20:48:50.811240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.070 [2024-11-26 20:48:50.811260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:56.070 [2024-11-26 20:48:50.814765] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.070 [2024-11-26 20:48:50.814844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.070 [2024-11-26 20:48:50.814864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:56.070 [2024-11-26 20:48:50.818401] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.070 [2024-11-26 20:48:50.818579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.070 [2024-11-26 20:48:50.818600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:56.070 [2024-11-26 20:48:50.822197] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.070 [2024-11-26 20:48:50.822374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.070 [2024-11-26 20:48:50.822393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:56.070 [2024-11-26 20:48:50.825977] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.070 [2024-11-26 20:48:50.826141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.070 [2024-11-26 20:48:50.826172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:56.070 [2024-11-26 20:48:50.829849] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.070 [2024-11-26 20:48:50.830002] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.070 [2024-11-26 20:48:50.830021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:56.070 [2024-11-26 20:48:50.833685] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.070 [2024-11-26 20:48:50.833826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.070 [2024-11-26 20:48:50.833847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:56.070 [2024-11-26 20:48:50.837489] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.070 [2024-11-26 20:48:50.837632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.070 [2024-11-26 20:48:50.837653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:56.070 [2024-11-26 20:48:50.841460] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.070 [2024-11-26 20:48:50.841614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.070 [2024-11-26 20:48:50.841634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:56.070 [2024-11-26 20:48:50.845038] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.070 [2024-11-26 20:48:50.845218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.070 [2024-11-26 20:48:50.845238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:56.070 [2024-11-26 20:48:50.848629] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.070 [2024-11-26 20:48:50.848695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.070 [2024-11-26 20:48:50.848716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:56.070 [2024-11-26 20:48:50.852569] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.070 [2024-11-26 20:48:50.852620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.070 [2024-11-26 20:48:50.852640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:56.070 [2024-11-26 20:48:50.856320] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.070 [2024-11-26 
20:48:50.856375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.070 [2024-11-26 20:48:50.856395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:56.070 [2024-11-26 20:48:50.860156] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.070 [2024-11-26 20:48:50.860222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.070 [2024-11-26 20:48:50.860244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:56.070 [2024-11-26 20:48:50.863936] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.070 [2024-11-26 20:48:50.863990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.070 [2024-11-26 20:48:50.864010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:56.070 [2024-11-26 20:48:50.867836] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.070 [2024-11-26 20:48:50.867895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.070 [2024-11-26 20:48:50.867916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:56.070 [2024-11-26 20:48:50.871914] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.070 [2024-11-26 20:48:50.871975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.070 [2024-11-26 20:48:50.871997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:56.071 [2024-11-26 20:48:50.875990] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.071 [2024-11-26 20:48:50.876045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.071 [2024-11-26 20:48:50.876068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:56.071 [2024-11-26 20:48:50.879963] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.071 [2024-11-26 20:48:50.880026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.071 [2024-11-26 20:48:50.880048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:56.071 [2024-11-26 20:48:50.883915] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 
00:20:56.071 8306.00 IOPS, 1038.25 MiB/s [2024-11-26T20:48:51.064Z] [2024-11-26 20:48:50.885422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.071 [2024-11-26 20:48:50.885453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:56.071 [2024-11-26 20:48:50.888576] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.071 [2024-11-26 20:48:50.888632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.071 [2024-11-26 20:48:50.888654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:56.071 [2024-11-26 20:48:50.892414] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.071 [2024-11-26 20:48:50.892520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.071 [2024-11-26 20:48:50.892540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:56.071 [2024-11-26 20:48:50.896212] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.071 [2024-11-26 20:48:50.896306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.071 [2024-11-26 20:48:50.896327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:56.071 [2024-11-26 20:48:50.899956] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.071 [2024-11-26 20:48:50.900077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.071 [2024-11-26 20:48:50.900106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:56.071 [2024-11-26 20:48:50.903409] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.071 [2024-11-26 20:48:50.903572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.071 [2024-11-26 20:48:50.903593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:56.071 [2024-11-26 20:48:50.906884] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.071 [2024-11-26 20:48:50.906933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.071 [2024-11-26 20:48:50.906953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:56.071 [2024-11-26 20:48:50.910568] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.071 [2024-11-26 20:48:50.910621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.071 [2024-11-26 20:48:50.910641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:56.071 [2024-11-26 20:48:50.914403] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.071 [2024-11-26 20:48:50.914456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.071 [2024-11-26 20:48:50.914476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:56.071 [2024-11-26 20:48:50.918302] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.071 [2024-11-26 20:48:50.918357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.071 [2024-11-26 20:48:50.918379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:56.071 [2024-11-26 20:48:50.922236] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.071 [2024-11-26 20:48:50.922289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.071 [2024-11-26 20:48:50.922309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:56.071 [2024-11-26 20:48:50.926181] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.071 [2024-11-26 20:48:50.926252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.071 [2024-11-26 20:48:50.926273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:56.071 [2024-11-26 20:48:50.929986] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.071 [2024-11-26 20:48:50.930053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.071 [2024-11-26 20:48:50.930073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:56.071 [2024-11-26 20:48:50.933779] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.071 [2024-11-26 20:48:50.933876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.071 [2024-11-26 20:48:50.933897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:56.071 [2024-11-26 20:48:50.937527] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.071 [2024-11-26 20:48:50.937593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.071 [2024-11-26 20:48:50.937614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:56.071 [2024-11-26 20:48:50.941185] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.071 [2024-11-26 20:48:50.941252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.071 [2024-11-26 20:48:50.941273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:56.071 [2024-11-26 20:48:50.944858] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.071 [2024-11-26 20:48:50.944958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.071 [2024-11-26 20:48:50.944978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:56.071 [2024-11-26 20:48:50.948624] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.071 [2024-11-26 20:48:50.948740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.071 [2024-11-26 20:48:50.948760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:56.071 [2024-11-26 20:48:50.952132] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.071 [2024-11-26 20:48:50.952329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.071 [2024-11-26 20:48:50.952350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:56.071 [2024-11-26 20:48:50.955642] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.071 [2024-11-26 20:48:50.955691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.071 [2024-11-26 20:48:50.955711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:56.071 [2024-11-26 20:48:50.959293] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.071 [2024-11-26 20:48:50.959354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.071 [2024-11-26 20:48:50.959375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:56.071 [2024-11-26 
20:48:50.962934] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.071 [2024-11-26 20:48:50.963004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.071 [2024-11-26 20:48:50.963024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:56.071 [2024-11-26 20:48:50.966638] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.071 [2024-11-26 20:48:50.966707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.071 [2024-11-26 20:48:50.966727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:56.071 [2024-11-26 20:48:50.970272] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.071 [2024-11-26 20:48:50.970329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.071 [2024-11-26 20:48:50.970349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:56.071 [2024-11-26 20:48:50.973943] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.071 [2024-11-26 20:48:50.974044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.071 [2024-11-26 20:48:50.974064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:56.072 [2024-11-26 20:48:50.977563] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.072 [2024-11-26 20:48:50.977645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.072 [2024-11-26 20:48:50.977666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:56.072 [2024-11-26 20:48:50.981236] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.072 [2024-11-26 20:48:50.981415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.072 [2024-11-26 20:48:50.981435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:56.072 [2024-11-26 20:48:50.984660] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.072 [2024-11-26 20:48:50.984827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.072 [2024-11-26 20:48:50.984847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 
00:20:56.072 [2024-11-26 20:48:50.988117] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.072 [2024-11-26 20:48:50.988181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.072 [2024-11-26 20:48:50.988201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:56.072 [2024-11-26 20:48:50.991773] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.072 [2024-11-26 20:48:50.991825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.072 [2024-11-26 20:48:50.991847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:56.072 [2024-11-26 20:48:50.995448] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.072 [2024-11-26 20:48:50.995503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.072 [2024-11-26 20:48:50.995524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:56.072 [2024-11-26 20:48:50.999143] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.072 [2024-11-26 20:48:50.999247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.072 [2024-11-26 20:48:50.999268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:56.072 [2024-11-26 20:48:51.002883] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.072 [2024-11-26 20:48:51.002956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.072 [2024-11-26 20:48:51.002977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:56.072 [2024-11-26 20:48:51.006597] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.072 [2024-11-26 20:48:51.006666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.072 [2024-11-26 20:48:51.006687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:56.072 [2024-11-26 20:48:51.010379] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.072 [2024-11-26 20:48:51.010522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.072 [2024-11-26 20:48:51.010542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0042 p:0 m:0 dnr:0 00:20:56.072 [2024-11-26 20:48:51.014249] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.072 [2024-11-26 20:48:51.014314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.072 [2024-11-26 20:48:51.014335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:56.072 [2024-11-26 20:48:51.018172] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.072 [2024-11-26 20:48:51.018229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.072 [2024-11-26 20:48:51.018249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:56.072 [2024-11-26 20:48:51.021898] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.072 [2024-11-26 20:48:51.022028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.072 [2024-11-26 20:48:51.022049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:56.072 [2024-11-26 20:48:51.025369] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.072 [2024-11-26 20:48:51.025527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.072 [2024-11-26 20:48:51.025547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:56.072 [2024-11-26 20:48:51.028818] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.072 [2024-11-26 20:48:51.028871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.072 [2024-11-26 20:48:51.028891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:56.072 [2024-11-26 20:48:51.032436] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.072 [2024-11-26 20:48:51.032485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.072 [2024-11-26 20:48:51.032505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:56.072 [2024-11-26 20:48:51.035996] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.072 [2024-11-26 20:48:51.036047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.072 [2024-11-26 20:48:51.036068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:56.072 [2024-11-26 20:48:51.039717] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.072 [2024-11-26 20:48:51.039778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.072 [2024-11-26 20:48:51.039799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:56.072 [2024-11-26 20:48:51.043561] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.072 [2024-11-26 20:48:51.043619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.072 [2024-11-26 20:48:51.043639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:56.072 [2024-11-26 20:48:51.047326] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.072 [2024-11-26 20:48:51.047375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.072 [2024-11-26 20:48:51.047395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:56.072 [2024-11-26 20:48:51.051074] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.072 [2024-11-26 20:48:51.051131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.072 [2024-11-26 20:48:51.051151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:56.072 [2024-11-26 20:48:51.054776] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.072 [2024-11-26 20:48:51.054825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.072 [2024-11-26 20:48:51.054846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:56.072 [2024-11-26 20:48:51.058477] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.072 [2024-11-26 20:48:51.058528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.072 [2024-11-26 20:48:51.058548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:56.332 [2024-11-26 20:48:51.062204] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.332 [2024-11-26 20:48:51.062259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.332 [2024-11-26 20:48:51.062279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:56.332 [2024-11-26 20:48:51.065836] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.332 [2024-11-26 20:48:51.065905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.332 [2024-11-26 20:48:51.065925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:56.332 [2024-11-26 20:48:51.069562] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.332 [2024-11-26 20:48:51.069769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.332 [2024-11-26 20:48:51.069788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:56.332 [2024-11-26 20:48:51.073368] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.332 [2024-11-26 20:48:51.073525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.332 [2024-11-26 20:48:51.073545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:56.332 [2024-11-26 20:48:51.077129] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.332 [2024-11-26 20:48:51.077327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.332 [2024-11-26 20:48:51.077346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:56.332 [2024-11-26 20:48:51.080893] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.332 [2024-11-26 20:48:51.081063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.332 [2024-11-26 20:48:51.081082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:56.332 [2024-11-26 20:48:51.084674] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.332 [2024-11-26 20:48:51.084816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.332 [2024-11-26 20:48:51.084835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:56.332 [2024-11-26 20:48:51.088433] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.332 [2024-11-26 20:48:51.088588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.332 [2024-11-26 20:48:51.088608] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:56.332 [2024-11-26 20:48:51.092181] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.332 [2024-11-26 20:48:51.092336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.332 [2024-11-26 20:48:51.092356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:56.332 [2024-11-26 20:48:51.095584] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.332 [2024-11-26 20:48:51.095728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.332 [2024-11-26 20:48:51.095748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:56.333 [2024-11-26 20:48:51.098966] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.333 [2024-11-26 20:48:51.099030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.333 [2024-11-26 20:48:51.099050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:56.333 [2024-11-26 20:48:51.102560] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.333 [2024-11-26 20:48:51.102627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.333 [2024-11-26 20:48:51.102647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:56.333 [2024-11-26 20:48:51.106103] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.333 [2024-11-26 20:48:51.106198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.333 [2024-11-26 20:48:51.106219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:56.333 [2024-11-26 20:48:51.109775] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.333 [2024-11-26 20:48:51.109827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.333 [2024-11-26 20:48:51.109847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:56.333 [2024-11-26 20:48:51.113516] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.333 [2024-11-26 20:48:51.113573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.333 [2024-11-26 
20:48:51.113593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:56.333 [2024-11-26 20:48:51.117127] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.333 [2024-11-26 20:48:51.117190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.333 [2024-11-26 20:48:51.117211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:56.333 [2024-11-26 20:48:51.120801] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.333 [2024-11-26 20:48:51.120938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.333 [2024-11-26 20:48:51.120958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:56.333 [2024-11-26 20:48:51.124377] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.333 [2024-11-26 20:48:51.124551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.333 [2024-11-26 20:48:51.124571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:56.333 [2024-11-26 20:48:51.128152] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.333 [2024-11-26 20:48:51.128300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.333 [2024-11-26 20:48:51.128319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:56.333 [2024-11-26 20:48:51.131929] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.333 [2024-11-26 20:48:51.132084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.333 [2024-11-26 20:48:51.132103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:56.333 [2024-11-26 20:48:51.135745] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.333 [2024-11-26 20:48:51.135889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.333 [2024-11-26 20:48:51.135909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:56.333 [2024-11-26 20:48:51.139189] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.333 [2024-11-26 20:48:51.139369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:56.333 [2024-11-26 20:48:51.139390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:56.333 [2024-11-26 20:48:51.142883] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.333 [2024-11-26 20:48:51.143150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.333 [2024-11-26 20:48:51.143181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:56.333 [2024-11-26 20:48:51.146727] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.333 [2024-11-26 20:48:51.147012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.333 [2024-11-26 20:48:51.147032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:56.333 [2024-11-26 20:48:51.150697] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.333 [2024-11-26 20:48:51.151002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.333 [2024-11-26 20:48:51.151028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:56.333 [2024-11-26 20:48:51.154560] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.333 [2024-11-26 20:48:51.154834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.333 [2024-11-26 20:48:51.154854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:56.333 [2024-11-26 20:48:51.158443] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.333 [2024-11-26 20:48:51.158717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.333 [2024-11-26 20:48:51.158737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:56.333 [2024-11-26 20:48:51.162126] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.333 [2024-11-26 20:48:51.162217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.333 [2024-11-26 20:48:51.162237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:56.333 [2024-11-26 20:48:51.165890] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.333 [2024-11-26 20:48:51.165949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:20:56.333 [2024-11-26 20:48:51.165970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:56.333 [2024-11-26 20:48:51.169603] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.333 [2024-11-26 20:48:51.169661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.333 [2024-11-26 20:48:51.169681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:56.333 [2024-11-26 20:48:51.173376] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.333 [2024-11-26 20:48:51.173431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.333 [2024-11-26 20:48:51.173452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:56.333 [2024-11-26 20:48:51.177151] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.333 [2024-11-26 20:48:51.177243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.333 [2024-11-26 20:48:51.177263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:56.333 [2024-11-26 20:48:51.180923] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.333 [2024-11-26 20:48:51.180976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.333 [2024-11-26 20:48:51.180997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:56.333 [2024-11-26 20:48:51.184645] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.333 [2024-11-26 20:48:51.184700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.333 [2024-11-26 20:48:51.184720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:56.333 [2024-11-26 20:48:51.188458] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.333 [2024-11-26 20:48:51.188518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.333 [2024-11-26 20:48:51.188538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:56.333 [2024-11-26 20:48:51.192260] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.333 [2024-11-26 20:48:51.192316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5376 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.333 [2024-11-26 20:48:51.192337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:56.333 [2024-11-26 20:48:51.196026] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.333 [2024-11-26 20:48:51.196081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.333 [2024-11-26 20:48:51.196101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:56.333 [2024-11-26 20:48:51.199792] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.334 [2024-11-26 20:48:51.199849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.334 [2024-11-26 20:48:51.199871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:56.334 [2024-11-26 20:48:51.203527] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.334 [2024-11-26 20:48:51.203584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.334 [2024-11-26 20:48:51.203605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:56.334 [2024-11-26 20:48:51.207346] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.334 [2024-11-26 20:48:51.207423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.334 [2024-11-26 20:48:51.207443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:56.334 [2024-11-26 20:48:51.211201] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.334 [2024-11-26 20:48:51.211258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.334 [2024-11-26 20:48:51.211278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:56.334 [2024-11-26 20:48:51.214888] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.334 [2024-11-26 20:48:51.214950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.334 [2024-11-26 20:48:51.214970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:56.334 [2024-11-26 20:48:51.218561] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.334 [2024-11-26 20:48:51.218634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.334 [2024-11-26 20:48:51.218655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:56.334 [2024-11-26 20:48:51.222247] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.334 [2024-11-26 20:48:51.222354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.334 [2024-11-26 20:48:51.222374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:56.334 [2024-11-26 20:48:51.225980] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.334 [2024-11-26 20:48:51.226097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.334 [2024-11-26 20:48:51.226118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:56.334 [2024-11-26 20:48:51.229345] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.334 [2024-11-26 20:48:51.229520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.334 [2024-11-26 20:48:51.229540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:56.334 [2024-11-26 20:48:51.232796] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.334 [2024-11-26 20:48:51.232863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.334 [2024-11-26 20:48:51.232884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:56.334 [2024-11-26 20:48:51.236459] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.334 [2024-11-26 20:48:51.236511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.334 [2024-11-26 20:48:51.236531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:56.334 [2024-11-26 20:48:51.240266] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.334 [2024-11-26 20:48:51.240316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.334 [2024-11-26 20:48:51.240336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:56.334 [2024-11-26 20:48:51.243936] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.334 [2024-11-26 20:48:51.243999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.334 [2024-11-26 20:48:51.244018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:56.334 [2024-11-26 20:48:51.247601] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.334 [2024-11-26 20:48:51.247653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.334 [2024-11-26 20:48:51.247673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:56.334 [2024-11-26 20:48:51.251171] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.334 [2024-11-26 20:48:51.251349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.334 [2024-11-26 20:48:51.251368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:56.334 [2024-11-26 20:48:51.254812] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.334 [2024-11-26 20:48:51.254903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.334 [2024-11-26 20:48:51.254923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:56.334 [2024-11-26 20:48:51.258556] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.334 [2024-11-26 20:48:51.258730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.334 [2024-11-26 20:48:51.258751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:56.334 [2024-11-26 20:48:51.262495] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.334 [2024-11-26 20:48:51.262622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.334 [2024-11-26 20:48:51.262641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:56.334 [2024-11-26 20:48:51.266410] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.334 [2024-11-26 20:48:51.266538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.334 [2024-11-26 20:48:51.266559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:56.334 [2024-11-26 20:48:51.270354] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.334 [2024-11-26 20:48:51.270497] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.334 [2024-11-26 20:48:51.270518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:56.334 [2024-11-26 20:48:51.273975] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.334 [2024-11-26 20:48:51.274149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.334 [2024-11-26 20:48:51.274188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:56.334 [2024-11-26 20:48:51.277568] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.334 [2024-11-26 20:48:51.277623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.334 [2024-11-26 20:48:51.277643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:56.334 [2024-11-26 20:48:51.281316] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.334 [2024-11-26 20:48:51.281379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.334 [2024-11-26 20:48:51.281399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:56.334 [2024-11-26 20:48:51.284956] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.334 [2024-11-26 20:48:51.285008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.334 [2024-11-26 20:48:51.285029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:56.334 [2024-11-26 20:48:51.288560] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.334 [2024-11-26 20:48:51.288639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.334 [2024-11-26 20:48:51.288661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:56.334 [2024-11-26 20:48:51.292375] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.334 [2024-11-26 20:48:51.292530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.334 [2024-11-26 20:48:51.292550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:56.334 [2024-11-26 20:48:51.295962] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.334 [2024-11-26 
20:48:51.296017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.334 [2024-11-26 20:48:51.296037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:56.334 [2024-11-26 20:48:51.299564] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.334 [2024-11-26 20:48:51.299642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.335 [2024-11-26 20:48:51.299663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:56.335 [2024-11-26 20:48:51.303174] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.335 [2024-11-26 20:48:51.303418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.335 [2024-11-26 20:48:51.303438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:56.335 [2024-11-26 20:48:51.306952] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.335 [2024-11-26 20:48:51.307080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.335 [2024-11-26 20:48:51.307100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:56.335 [2024-11-26 20:48:51.310697] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.335 [2024-11-26 20:48:51.310842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.335 [2024-11-26 20:48:51.310863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:56.335 [2024-11-26 20:48:51.314478] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.335 [2024-11-26 20:48:51.314616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.335 [2024-11-26 20:48:51.314635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:56.335 [2024-11-26 20:48:51.318178] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.335 [2024-11-26 20:48:51.318338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.335 [2024-11-26 20:48:51.318358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:56.594 [2024-11-26 20:48:51.322023] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 
00:20:56.594 [2024-11-26 20:48:51.322197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.594 [2024-11-26 20:48:51.322217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:56.594 [2024-11-26 20:48:51.326009] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.594 [2024-11-26 20:48:51.326183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.594 [2024-11-26 20:48:51.326205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:56.594 [2024-11-26 20:48:51.329628] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.594 [2024-11-26 20:48:51.329849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.594 [2024-11-26 20:48:51.329884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:56.594 [2024-11-26 20:48:51.333337] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.594 [2024-11-26 20:48:51.333392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.594 [2024-11-26 20:48:51.333413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:56.594 [2024-11-26 20:48:51.337050] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.594 [2024-11-26 20:48:51.337104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.594 [2024-11-26 20:48:51.337124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:56.594 [2024-11-26 20:48:51.340661] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.594 [2024-11-26 20:48:51.340711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.594 [2024-11-26 20:48:51.340732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:56.594 [2024-11-26 20:48:51.344252] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.594 [2024-11-26 20:48:51.344333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.594 [2024-11-26 20:48:51.344353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:56.594 [2024-11-26 20:48:51.347851] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with 
pdu=0x200016eff3c8 00:20:56.594 [2024-11-26 20:48:51.347927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.594 [2024-11-26 20:48:51.347946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:56.594 [2024-11-26 20:48:51.351460] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.595 [2024-11-26 20:48:51.351533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.595 [2024-11-26 20:48:51.351553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:56.595 [2024-11-26 20:48:51.355065] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.595 [2024-11-26 20:48:51.355141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.595 [2024-11-26 20:48:51.355176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:56.595 [2024-11-26 20:48:51.358658] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.595 [2024-11-26 20:48:51.358832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.595 [2024-11-26 20:48:51.358852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:56.595 [2024-11-26 20:48:51.362426] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.595 [2024-11-26 20:48:51.362579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.595 [2024-11-26 20:48:51.362599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:56.595 [2024-11-26 20:48:51.366176] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.595 [2024-11-26 20:48:51.366327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.595 [2024-11-26 20:48:51.366347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:56.595 [2024-11-26 20:48:51.369899] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.595 [2024-11-26 20:48:51.370023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.595 [2024-11-26 20:48:51.370042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:56.595 [2024-11-26 20:48:51.373647] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.595 [2024-11-26 20:48:51.373803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.595 [2024-11-26 20:48:51.373823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:56.595 [2024-11-26 20:48:51.377357] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.595 [2024-11-26 20:48:51.377510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.595 [2024-11-26 20:48:51.377530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:56.595 [2024-11-26 20:48:51.381115] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.595 [2024-11-26 20:48:51.381283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.595 [2024-11-26 20:48:51.381303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:56.595 [2024-11-26 20:48:51.384918] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.595 [2024-11-26 20:48:51.385071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.595 [2024-11-26 20:48:51.385090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:56.595 [2024-11-26 20:48:51.388695] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.595 [2024-11-26 20:48:51.388850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.595 [2024-11-26 20:48:51.388869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:56.595 [2024-11-26 20:48:51.392487] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.595 [2024-11-26 20:48:51.392643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.595 [2024-11-26 20:48:51.392662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:56.595 [2024-11-26 20:48:51.396196] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.595 [2024-11-26 20:48:51.396347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.595 [2024-11-26 20:48:51.396367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:56.595 [2024-11-26 20:48:51.400043] tcp.c:2233:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.595 [2024-11-26 20:48:51.400216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.595 [2024-11-26 20:48:51.400236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:56.595 [2024-11-26 20:48:51.403568] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.595 [2024-11-26 20:48:51.403765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.595 [2024-11-26 20:48:51.403786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:56.595 [2024-11-26 20:48:51.407065] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.595 [2024-11-26 20:48:51.407121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.595 [2024-11-26 20:48:51.407142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:56.595 [2024-11-26 20:48:51.410813] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.595 [2024-11-26 20:48:51.410868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.595 [2024-11-26 20:48:51.410889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:56.595 [2024-11-26 20:48:51.414521] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.595 [2024-11-26 20:48:51.414575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.595 [2024-11-26 20:48:51.414596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:56.595 [2024-11-26 20:48:51.418227] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.595 [2024-11-26 20:48:51.418286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.595 [2024-11-26 20:48:51.418307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:56.595 [2024-11-26 20:48:51.421938] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.595 [2024-11-26 20:48:51.422071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.595 [2024-11-26 20:48:51.422091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:56.595 [2024-11-26 20:48:51.425635] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.595 [2024-11-26 20:48:51.425709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.595 [2024-11-26 20:48:51.425730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:56.595 [2024-11-26 20:48:51.429229] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.595 [2024-11-26 20:48:51.429297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.595 [2024-11-26 20:48:51.429318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:56.596 [2024-11-26 20:48:51.432950] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.596 [2024-11-26 20:48:51.433138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.596 [2024-11-26 20:48:51.433169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:56.596 [2024-11-26 20:48:51.436707] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.596 [2024-11-26 20:48:51.436832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.596 [2024-11-26 20:48:51.436852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:56.596 [2024-11-26 20:48:51.440468] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.596 [2024-11-26 20:48:51.440622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.596 [2024-11-26 20:48:51.440642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:56.596 [2024-11-26 20:48:51.444359] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.596 [2024-11-26 20:48:51.444534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.596 [2024-11-26 20:48:51.444554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:56.596 [2024-11-26 20:48:51.448299] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.596 [2024-11-26 20:48:51.448456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.596 [2024-11-26 20:48:51.448477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:56.596 [2024-11-26 
20:48:51.452195] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.596 [2024-11-26 20:48:51.452351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.596 [2024-11-26 20:48:51.452372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:56.596 [2024-11-26 20:48:51.456133] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.596 [2024-11-26 20:48:51.456314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.596 [2024-11-26 20:48:51.456335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:56.596 [2024-11-26 20:48:51.460020] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.596 [2024-11-26 20:48:51.460147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.596 [2024-11-26 20:48:51.460179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:56.596 [2024-11-26 20:48:51.463921] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.596 [2024-11-26 20:48:51.464059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.596 [2024-11-26 20:48:51.464080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:56.596 [2024-11-26 20:48:51.467775] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.596 [2024-11-26 20:48:51.467922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.596 [2024-11-26 20:48:51.467943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:56.596 [2024-11-26 20:48:51.471685] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.596 [2024-11-26 20:48:51.471829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.596 [2024-11-26 20:48:51.471849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:56.596 [2024-11-26 20:48:51.475097] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.596 [2024-11-26 20:48:51.475304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.596 [2024-11-26 20:48:51.475325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 
00:20:56.596 [2024-11-26 20:48:51.478662] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.596 [2024-11-26 20:48:51.478711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.596 [2024-11-26 20:48:51.478731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:56.596 [2024-11-26 20:48:51.482413] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.596 [2024-11-26 20:48:51.482463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.596 [2024-11-26 20:48:51.482483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:56.596 [2024-11-26 20:48:51.485994] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.596 [2024-11-26 20:48:51.486049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.596 [2024-11-26 20:48:51.486069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:56.596 [2024-11-26 20:48:51.489578] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.596 [2024-11-26 20:48:51.489656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.596 [2024-11-26 20:48:51.489677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:56.596 [2024-11-26 20:48:51.493372] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.596 [2024-11-26 20:48:51.493439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.596 [2024-11-26 20:48:51.493460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:56.596 [2024-11-26 20:48:51.497139] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.596 [2024-11-26 20:48:51.497199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.596 [2024-11-26 20:48:51.497219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:56.596 [2024-11-26 20:48:51.501003] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.596 [2024-11-26 20:48:51.501057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.596 [2024-11-26 20:48:51.501077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 
sqhd:0042 p:0 m:0 dnr:0 00:20:56.596 [2024-11-26 20:48:51.505073] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.596 [2024-11-26 20:48:51.505187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.596 [2024-11-26 20:48:51.505208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:56.596 [2024-11-26 20:48:51.508819] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.596 [2024-11-26 20:48:51.508886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.597 [2024-11-26 20:48:51.508907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:56.597 [2024-11-26 20:48:51.512772] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.597 [2024-11-26 20:48:51.512877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.597 [2024-11-26 20:48:51.512898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:56.597 [2024-11-26 20:48:51.516481] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.597 [2024-11-26 20:48:51.516611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.597 [2024-11-26 20:48:51.516631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:56.597 [2024-11-26 20:48:51.520441] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.597 [2024-11-26 20:48:51.520559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.597 [2024-11-26 20:48:51.520580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:56.597 [2024-11-26 20:48:51.524316] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.597 [2024-11-26 20:48:51.524446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.597 [2024-11-26 20:48:51.524466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:56.597 [2024-11-26 20:48:51.528055] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.597 [2024-11-26 20:48:51.528216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.597 [2024-11-26 20:48:51.528237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:56.597 [2024-11-26 20:48:51.531791] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.597 [2024-11-26 20:48:51.531907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.597 [2024-11-26 20:48:51.531927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:56.597 [2024-11-26 20:48:51.535622] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.597 [2024-11-26 20:48:51.535753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.597 [2024-11-26 20:48:51.535772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:56.597 [2024-11-26 20:48:51.538996] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.597 [2024-11-26 20:48:51.539138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.597 [2024-11-26 20:48:51.539171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:56.597 [2024-11-26 20:48:51.542411] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.597 [2024-11-26 20:48:51.542463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.597 [2024-11-26 20:48:51.542484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:56.597 [2024-11-26 20:48:51.546128] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.597 [2024-11-26 20:48:51.546197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.597 [2024-11-26 20:48:51.546218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:56.597 [2024-11-26 20:48:51.549986] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.597 [2024-11-26 20:48:51.550044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.597 [2024-11-26 20:48:51.550065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:56.597 [2024-11-26 20:48:51.553716] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.597 [2024-11-26 20:48:51.553773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.597 [2024-11-26 20:48:51.553795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:56.597 [2024-11-26 20:48:51.557472] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.597 [2024-11-26 20:48:51.557526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.597 [2024-11-26 20:48:51.557548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:56.597 [2024-11-26 20:48:51.561215] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.597 [2024-11-26 20:48:51.561315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.597 [2024-11-26 20:48:51.561338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:56.597 [2024-11-26 20:48:51.565150] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.597 [2024-11-26 20:48:51.565221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.597 [2024-11-26 20:48:51.565241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:56.597 [2024-11-26 20:48:51.568952] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.597 [2024-11-26 20:48:51.569015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.597 [2024-11-26 20:48:51.569035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:56.597 [2024-11-26 20:48:51.572755] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.597 [2024-11-26 20:48:51.572810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.597 [2024-11-26 20:48:51.572831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:56.597 [2024-11-26 20:48:51.576597] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.597 [2024-11-26 20:48:51.576663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.597 [2024-11-26 20:48:51.576685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:56.597 [2024-11-26 20:48:51.580615] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.597 [2024-11-26 20:48:51.580674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.597 [2024-11-26 20:48:51.580694] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:56.858 [2024-11-26 20:48:51.584539] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.858 [2024-11-26 20:48:51.584606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.858 [2024-11-26 20:48:51.584627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:56.858 [2024-11-26 20:48:51.588327] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.858 [2024-11-26 20:48:51.588386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.858 [2024-11-26 20:48:51.588406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:56.858 [2024-11-26 20:48:51.591997] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.858 [2024-11-26 20:48:51.592186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.858 [2024-11-26 20:48:51.592208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:56.858 [2024-11-26 20:48:51.595888] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.858 [2024-11-26 20:48:51.596076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.858 [2024-11-26 20:48:51.596095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:56.858 [2024-11-26 20:48:51.599850] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.858 [2024-11-26 20:48:51.599986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.858 [2024-11-26 20:48:51.600008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:56.858 [2024-11-26 20:48:51.603712] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.858 [2024-11-26 20:48:51.603866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.858 [2024-11-26 20:48:51.603886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:56.858 [2024-11-26 20:48:51.607551] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.858 [2024-11-26 20:48:51.607694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.859 [2024-11-26 
20:48:51.607715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:56.859 [2024-11-26 20:48:51.611396] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.859 [2024-11-26 20:48:51.611553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.859 [2024-11-26 20:48:51.611573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:56.859 [2024-11-26 20:48:51.615127] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.859 [2024-11-26 20:48:51.615271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.859 [2024-11-26 20:48:51.615292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:56.859 [2024-11-26 20:48:51.618919] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.859 [2024-11-26 20:48:51.619072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.859 [2024-11-26 20:48:51.619092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:56.859 [2024-11-26 20:48:51.622711] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.859 [2024-11-26 20:48:51.622888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.859 [2024-11-26 20:48:51.622907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:56.859 [2024-11-26 20:48:51.626477] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.859 [2024-11-26 20:48:51.626631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.859 [2024-11-26 20:48:51.626650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:56.859 [2024-11-26 20:48:51.630206] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.859 [2024-11-26 20:48:51.630359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.859 [2024-11-26 20:48:51.630379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:56.859 [2024-11-26 20:48:51.633954] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.859 [2024-11-26 20:48:51.634106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:56.859 [2024-11-26 20:48:51.634125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:56.859 [2024-11-26 20:48:51.637712] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.859 [2024-11-26 20:48:51.637838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.859 [2024-11-26 20:48:51.637858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:56.859 [2024-11-26 20:48:51.641439] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.859 [2024-11-26 20:48:51.641565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.859 [2024-11-26 20:48:51.641586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:56.859 [2024-11-26 20:48:51.645212] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.859 [2024-11-26 20:48:51.645364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.859 [2024-11-26 20:48:51.645383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:56.859 [2024-11-26 20:48:51.648915] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.859 [2024-11-26 20:48:51.649059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.859 [2024-11-26 20:48:51.649079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:56.859 [2024-11-26 20:48:51.652686] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.859 [2024-11-26 20:48:51.652843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.859 [2024-11-26 20:48:51.652864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:56.859 [2024-11-26 20:48:51.656080] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.859 [2024-11-26 20:48:51.656277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.859 [2024-11-26 20:48:51.656298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:56.859 [2024-11-26 20:48:51.659615] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.859 [2024-11-26 20:48:51.659672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:20:56.859 [2024-11-26 20:48:51.659691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:56.859 [2024-11-26 20:48:51.663282] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.859 [2024-11-26 20:48:51.663345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.859 [2024-11-26 20:48:51.663365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:56.859 [2024-11-26 20:48:51.666970] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.859 [2024-11-26 20:48:51.667025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.859 [2024-11-26 20:48:51.667045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:56.859 [2024-11-26 20:48:51.670591] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.859 [2024-11-26 20:48:51.670643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.859 [2024-11-26 20:48:51.670663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:56.859 [2024-11-26 20:48:51.674376] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.859 [2024-11-26 20:48:51.674483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.859 [2024-11-26 20:48:51.674504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:56.859 [2024-11-26 20:48:51.678098] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.859 [2024-11-26 20:48:51.678187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.859 [2024-11-26 20:48:51.678208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:56.859 [2024-11-26 20:48:51.681838] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.859 [2024-11-26 20:48:51.681891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.859 [2024-11-26 20:48:51.681911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:56.859 [2024-11-26 20:48:51.685591] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.859 [2024-11-26 20:48:51.685750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8384 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.859 [2024-11-26 20:48:51.685770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:56.859 [2024-11-26 20:48:51.689075] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.859 [2024-11-26 20:48:51.689259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.859 [2024-11-26 20:48:51.689279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:56.859 [2024-11-26 20:48:51.692600] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.859 [2024-11-26 20:48:51.692654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.859 [2024-11-26 20:48:51.692674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:56.859 [2024-11-26 20:48:51.696218] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.859 [2024-11-26 20:48:51.696278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.859 [2024-11-26 20:48:51.696299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:56.859 [2024-11-26 20:48:51.699863] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.859 [2024-11-26 20:48:51.699917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.859 [2024-11-26 20:48:51.699936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:56.859 [2024-11-26 20:48:51.703608] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.859 [2024-11-26 20:48:51.703660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.859 [2024-11-26 20:48:51.703680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:56.859 [2024-11-26 20:48:51.707281] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.860 [2024-11-26 20:48:51.707350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.860 [2024-11-26 20:48:51.707371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:56.860 [2024-11-26 20:48:51.710942] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.860 [2024-11-26 20:48:51.711024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 
lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.860 [2024-11-26 20:48:51.711044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:56.860 [2024-11-26 20:48:51.714594] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.860 [2024-11-26 20:48:51.714659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.860 [2024-11-26 20:48:51.714679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:56.860 [2024-11-26 20:48:51.718308] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.860 [2024-11-26 20:48:51.718473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.860 [2024-11-26 20:48:51.718493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:56.860 [2024-11-26 20:48:51.721782] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.860 [2024-11-26 20:48:51.721951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.860 [2024-11-26 20:48:51.721971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:56.860 [2024-11-26 20:48:51.725485] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.860 [2024-11-26 20:48:51.725766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.860 [2024-11-26 20:48:51.725792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:56.860 [2024-11-26 20:48:51.729301] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.860 [2024-11-26 20:48:51.729572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.860 [2024-11-26 20:48:51.729592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:56.860 [2024-11-26 20:48:51.733122] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.860 [2024-11-26 20:48:51.733403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.860 [2024-11-26 20:48:51.733423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:56.860 [2024-11-26 20:48:51.736923] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.860 [2024-11-26 20:48:51.737206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:1 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.860 [2024-11-26 20:48:51.737226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:56.860 [2024-11-26 20:48:51.740694] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.860 [2024-11-26 20:48:51.740961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.860 [2024-11-26 20:48:51.740980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:56.860 [2024-11-26 20:48:51.744506] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.860 [2024-11-26 20:48:51.744771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.860 [2024-11-26 20:48:51.744792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:56.860 [2024-11-26 20:48:51.748271] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.860 [2024-11-26 20:48:51.748530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.860 [2024-11-26 20:48:51.748549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:56.860 [2024-11-26 20:48:51.752090] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.860 [2024-11-26 20:48:51.752369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.860 [2024-11-26 20:48:51.752395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:56.860 [2024-11-26 20:48:51.755974] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.860 [2024-11-26 20:48:51.756278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.860 [2024-11-26 20:48:51.756300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:56.860 [2024-11-26 20:48:51.760068] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.860 [2024-11-26 20:48:51.760358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.860 [2024-11-26 20:48:51.760379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:56.860 [2024-11-26 20:48:51.763748] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.860 [2024-11-26 20:48:51.763817] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.860 [2024-11-26 20:48:51.763838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:56.860 [2024-11-26 20:48:51.767642] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.860 [2024-11-26 20:48:51.767695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.860 [2024-11-26 20:48:51.767715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:56.860 [2024-11-26 20:48:51.771456] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.860 [2024-11-26 20:48:51.771510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.860 [2024-11-26 20:48:51.771531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:56.860 [2024-11-26 20:48:51.775372] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.860 [2024-11-26 20:48:51.775447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.860 [2024-11-26 20:48:51.775468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:56.860 [2024-11-26 20:48:51.779214] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.860 [2024-11-26 20:48:51.779317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.860 [2024-11-26 20:48:51.779339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:56.860 [2024-11-26 20:48:51.783079] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.860 [2024-11-26 20:48:51.783139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.860 [2024-11-26 20:48:51.783172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:56.860 [2024-11-26 20:48:51.786858] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.860 [2024-11-26 20:48:51.786924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.860 [2024-11-26 20:48:51.786944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:56.860 [2024-11-26 20:48:51.790590] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.860 [2024-11-26 
20:48:51.790763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.860 [2024-11-26 20:48:51.790783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:56.860 [2024-11-26 20:48:51.794506] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.860 [2024-11-26 20:48:51.794660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.860 [2024-11-26 20:48:51.794680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:56.860 [2024-11-26 20:48:51.798387] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.860 [2024-11-26 20:48:51.798530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.860 [2024-11-26 20:48:51.798549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:56.860 [2024-11-26 20:48:51.802224] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.860 [2024-11-26 20:48:51.802402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.860 [2024-11-26 20:48:51.802422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:56.860 [2024-11-26 20:48:51.805708] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.860 [2024-11-26 20:48:51.805865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.860 [2024-11-26 20:48:51.805884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:56.860 [2024-11-26 20:48:51.809103] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.860 [2024-11-26 20:48:51.809168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.861 [2024-11-26 20:48:51.809188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:56.861 [2024-11-26 20:48:51.812660] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.861 [2024-11-26 20:48:51.812713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.861 [2024-11-26 20:48:51.812733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:56.861 [2024-11-26 20:48:51.816326] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 
00:20:56.861 [2024-11-26 20:48:51.816378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.861 [2024-11-26 20:48:51.816398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:56.861 [2024-11-26 20:48:51.819901] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.861 [2024-11-26 20:48:51.819951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.861 [2024-11-26 20:48:51.819972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:56.861 [2024-11-26 20:48:51.823660] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.861 [2024-11-26 20:48:51.823720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.861 [2024-11-26 20:48:51.823742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:56.861 [2024-11-26 20:48:51.827452] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.861 [2024-11-26 20:48:51.827520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.861 [2024-11-26 20:48:51.827540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:56.861 [2024-11-26 20:48:51.831142] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.861 [2024-11-26 20:48:51.831205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.861 [2024-11-26 20:48:51.831226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:56.861 [2024-11-26 20:48:51.834886] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.861 [2024-11-26 20:48:51.834966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.861 [2024-11-26 20:48:51.834986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:56.861 [2024-11-26 20:48:51.838588] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.861 [2024-11-26 20:48:51.838639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.861 [2024-11-26 20:48:51.838659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:56.861 [2024-11-26 20:48:51.842300] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with 
pdu=0x200016eff3c8 00:20:56.861 [2024-11-26 20:48:51.842354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.861 [2024-11-26 20:48:51.842374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:56.861 [2024-11-26 20:48:51.846102] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:56.861 [2024-11-26 20:48:51.846172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.861 [2024-11-26 20:48:51.846193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:57.120 [2024-11-26 20:48:51.849983] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:57.120 [2024-11-26 20:48:51.850035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.120 [2024-11-26 20:48:51.850055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:57.120 [2024-11-26 20:48:51.853612] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:57.120 [2024-11-26 20:48:51.853715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.120 [2024-11-26 20:48:51.853736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:57.120 [2024-11-26 20:48:51.857324] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:57.120 [2024-11-26 20:48:51.857408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.120 [2024-11-26 20:48:51.857429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:57.120 [2024-11-26 20:48:51.860900] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:57.120 [2024-11-26 20:48:51.860952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.121 [2024-11-26 20:48:51.860972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:57.121 [2024-11-26 20:48:51.864510] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:57.121 [2024-11-26 20:48:51.864668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.121 [2024-11-26 20:48:51.864688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:57.121 [2024-11-26 20:48:51.868244] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:57.121 [2024-11-26 20:48:51.868382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.121 [2024-11-26 20:48:51.868402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:57.121 [2024-11-26 20:48:51.871995] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:57.121 [2024-11-26 20:48:51.872126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.121 [2024-11-26 20:48:51.872145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:57.121 [2024-11-26 20:48:51.875731] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:57.121 [2024-11-26 20:48:51.875888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.121 [2024-11-26 20:48:51.875908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:57.121 [2024-11-26 20:48:51.879481] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:57.121 [2024-11-26 20:48:51.879634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.121 [2024-11-26 20:48:51.879654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:57.121 [2024-11-26 20:48:51.883223] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6895b0) with pdu=0x200016eff3c8 00:20:57.121 [2024-11-26 20:48:51.883359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.121 [2024-11-26 20:48:51.883379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:57.121 8311.00 IOPS, 1038.88 MiB/s 00:20:57.121 Latency(us) 00:20:57.121 [2024-11-26T20:48:52.114Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:57.121 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:20:57.121 nvme0n1 : 2.00 8306.96 1038.37 0.00 0.00 1922.47 1178.09 5180.46 00:20:57.121 [2024-11-26T20:48:52.114Z] =================================================================================================================== 00:20:57.121 [2024-11-26T20:48:52.114Z] Total : 8306.96 1038.37 0.00 0.00 1922.47 1178.09 5180.46 00:20:57.121 { 00:20:57.121 "results": [ 00:20:57.121 { 00:20:57.121 "job": "nvme0n1", 00:20:57.121 "core_mask": "0x2", 00:20:57.121 "workload": "randwrite", 00:20:57.121 "status": "finished", 00:20:57.121 "queue_depth": 16, 00:20:57.121 "io_size": 131072, 00:20:57.121 "runtime": 2.003742, 00:20:57.121 "iops": 8306.957682176648, 00:20:57.121 "mibps": 1038.369710272081, 00:20:57.121 "io_failed": 0, 00:20:57.121 "io_timeout": 0, 00:20:57.121 "avg_latency_us": 1922.4713599679583, 00:20:57.121 
"min_latency_us": 1178.087619047619, 00:20:57.121 "max_latency_us": 5180.464761904762 00:20:57.121 } 00:20:57.121 ], 00:20:57.121 "core_count": 1 00:20:57.121 } 00:20:57.121 20:48:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:20:57.121 20:48:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:20:57.121 | .driver_specific 00:20:57.121 | .nvme_error 00:20:57.121 | .status_code 00:20:57.121 | .command_transient_transport_error' 00:20:57.121 20:48:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:20:57.121 20:48:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:20:57.379 20:48:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 537 > 0 )) 00:20:57.379 20:48:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 81026 00:20:57.379 20:48:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 81026 ']' 00:20:57.379 20:48:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 81026 00:20:57.379 20:48:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:20:57.379 20:48:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:57.379 20:48:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81026 00:20:57.379 20:48:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:57.379 20:48:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:57.379 killing process with pid 81026 00:20:57.379 20:48:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81026' 00:20:57.379 Received shutdown signal, test time was about 2.000000 seconds 00:20:57.379 00:20:57.379 Latency(us) 00:20:57.379 [2024-11-26T20:48:52.372Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:57.379 [2024-11-26T20:48:52.372Z] =================================================================================================================== 00:20:57.379 [2024-11-26T20:48:52.372Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:57.379 20:48:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 81026 00:20:57.379 20:48:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 81026 00:20:57.379 20:48:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 80850 00:20:57.379 20:48:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 80850 ']' 00:20:57.379 20:48:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 80850 00:20:57.379 20:48:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:20:57.379 20:48:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:57.379 20:48:52 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80850 00:20:57.637 20:48:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:57.637 20:48:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:57.637 killing process with pid 80850 00:20:57.637 20:48:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80850' 00:20:57.637 20:48:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 80850 00:20:57.637 20:48:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 80850 00:20:57.637 00:20:57.637 real 0m15.210s 00:20:57.637 user 0m28.542s 00:20:57.637 sys 0m5.466s 00:20:57.637 20:48:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:57.637 20:48:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:57.637 ************************************ 00:20:57.637 END TEST nvmf_digest_error 00:20:57.637 ************************************ 00:20:57.895 20:48:52 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:20:57.895 20:48:52 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:20:57.895 20:48:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:57.895 20:48:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:20:57.895 20:48:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:57.895 20:48:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:20:57.895 20:48:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:57.895 20:48:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:57.895 rmmod nvme_tcp 00:20:57.895 rmmod nvme_fabrics 00:20:57.895 rmmod nvme_keyring 00:20:57.895 20:48:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:57.895 20:48:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:20:57.895 20:48:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:20:57.895 20:48:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 80850 ']' 00:20:57.895 20:48:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 80850 00:20:57.895 20:48:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 80850 ']' 00:20:57.895 20:48:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 80850 00:20:57.895 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (80850) - No such process 00:20:57.895 Process with pid 80850 is not found 00:20:57.895 20:48:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 80850 is not found' 00:20:57.895 20:48:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:57.895 20:48:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:57.895 20:48:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:57.895 20:48:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:20:57.895 20:48:52 
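The pass/fail decision for this digest-error case sits in the trace just above: host/digest.sh reads the bdev's error counters back over bdevperf's RPC socket and requires that at least one COMMAND TRANSIENT TRANSPORT ERROR was recorded (537 were, matching the flood of data-digest failures logged before the summary). A minimal sketch of that check, assuming the same socket path and bdev name used in this run:

  # Pull the transient transport error counter out of bdev_get_iostat and assert it is non-zero.
  errcount=$(scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
  (( errcount > 0 ))   # 537 in this run, so the test case passes

The teardown that follows is deliberately tolerant: kill -0 80850 reports "No such process" because the nvmf target was already stopped a few lines earlier, killprocess only logs that the pid is gone, and nvmftestfini goes on to unload nvme_tcp, nvme_fabrics and nvme_keyring.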
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:20:57.895 20:48:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:20:57.895 20:48:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:57.895 20:48:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:57.895 20:48:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:20:57.895 20:48:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:20:57.895 20:48:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:20:57.895 20:48:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:20:57.895 20:48:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:20:57.895 20:48:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:20:57.895 20:48:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:20:57.895 20:48:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:20:57.895 20:48:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:20:57.895 20:48:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:20:57.895 20:48:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:20:58.153 20:48:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:20:58.153 20:48:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:58.153 20:48:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:58.153 20:48:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@246 -- # remove_spdk_ns 00:20:58.153 20:48:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:58.153 20:48:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:58.153 20:48:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:58.153 20:48:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@300 -- # return 0 00:20:58.153 00:20:58.153 real 0m34.656s 00:20:58.153 user 1m3.900s 00:20:58.153 sys 0m11.765s 00:20:58.153 20:48:53 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:58.153 20:48:53 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:20:58.153 ************************************ 00:20:58.153 END TEST nvmf_digest 00:20:58.153 ************************************ 00:20:58.153 20:48:53 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:20:58.153 20:48:53 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 1 -eq 1 ]] 00:20:58.153 20:48:53 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@42 -- # run_test nvmf_host_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:20:58.153 20:48:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:58.153 20:48:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:58.153 20:48:53 nvmf_tcp.nvmf_host -- 
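Network cleanup at the end of nvmf_digest leans on a tagging convention: every iptables rule the suite installed earlier carries an 'SPDK_NVMF:' comment, so iptr can strip exactly those rules by re-loading a filtered dump, after which the veth bridge, the host-side interfaces and the nvmf_tgt_ns_spdk namespace are deleted. The filter step is essentially:

  # Re-apply the current ruleset minus everything tagged with the SPDK_NVMF comment.
  iptables-save | grep -v SPDK_NVMF | iptables-restore

Because a fresh START TEST follows immediately, nvmftestinit for nvmf_host_multipath rebuilds the same topology from scratch rather than reusing anything left over from the digest run.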
common/autotest_common.sh@10 -- # set +x 00:20:58.153 ************************************ 00:20:58.153 START TEST nvmf_host_multipath 00:20:58.153 ************************************ 00:20:58.153 20:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:20:58.412 * Looking for test storage... 00:20:58.412 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:58.412 20:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:58.412 20:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:58.412 20:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:20:58.412 20:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:58.412 20:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:58.412 20:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:58.412 20:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:58.412 20:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:20:58.412 20:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:20:58.412 20:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:20:58.412 20:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:20:58.412 20:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:20:58.412 20:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:20:58.412 20:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:20:58.412 20:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:58.412 20:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@344 -- # case "$op" in 00:20:58.412 20:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@345 -- # : 1 00:20:58.412 20:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:58.412 20:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:58.412 20:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@365 -- # decimal 1 00:20:58.412 20:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@353 -- # local d=1 00:20:58.412 20:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:58.412 20:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@355 -- # echo 1 00:20:58.412 20:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:20:58.412 20:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@366 -- # decimal 2 00:20:58.412 20:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@353 -- # local d=2 00:20:58.412 20:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:58.412 20:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@355 -- # echo 2 00:20:58.412 20:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:20:58.412 20:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:58.412 20:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:58.412 20:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@368 -- # return 0 00:20:58.412 20:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:58.412 20:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:58.412 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:58.412 --rc genhtml_branch_coverage=1 00:20:58.412 --rc genhtml_function_coverage=1 00:20:58.412 --rc genhtml_legend=1 00:20:58.412 --rc geninfo_all_blocks=1 00:20:58.412 --rc geninfo_unexecuted_blocks=1 00:20:58.412 00:20:58.412 ' 00:20:58.412 20:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:58.412 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:58.412 --rc genhtml_branch_coverage=1 00:20:58.412 --rc genhtml_function_coverage=1 00:20:58.412 --rc genhtml_legend=1 00:20:58.412 --rc geninfo_all_blocks=1 00:20:58.412 --rc geninfo_unexecuted_blocks=1 00:20:58.412 00:20:58.412 ' 00:20:58.412 20:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:58.412 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:58.412 --rc genhtml_branch_coverage=1 00:20:58.412 --rc genhtml_function_coverage=1 00:20:58.412 --rc genhtml_legend=1 00:20:58.412 --rc geninfo_all_blocks=1 00:20:58.412 --rc geninfo_unexecuted_blocks=1 00:20:58.412 00:20:58.412 ' 00:20:58.412 20:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:58.412 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:58.412 --rc genhtml_branch_coverage=1 00:20:58.412 --rc genhtml_function_coverage=1 00:20:58.412 --rc genhtml_legend=1 00:20:58.412 --rc geninfo_all_blocks=1 00:20:58.412 --rc geninfo_unexecuted_blocks=1 00:20:58.412 00:20:58.412 ' 00:20:58.413 20:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:58.413 20:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@7 -- # uname -s 00:20:58.413 20:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
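The lcov probe above is a plain field-by-field version comparison: cmp_versions splits '1.15' and '2' on '.', '-' and ':' and walks the components, so lt 1.15 2 succeeds (1 < 2) and the branch/function coverage flags are exported for the rest of the run. A compact sketch of the same idea (a simplification; the real helper lives in scripts/common.sh and supports more operators):

  # Return 0 if version $1 sorts strictly before version $2, comparing numeric fields.
  version_lt() {
      local -a a b; local i
      IFS='.-:' read -ra a <<< "$1"
      IFS='.-:' read -ra b <<< "$2"
      for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
          (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
          (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
      done
      return 1   # equal versions are not "less than"
  }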
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:58.413 20:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:58.413 20:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:58.413 20:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:58.413 20:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:58.413 20:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:58.413 20:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:58.413 20:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:58.413 20:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:58.413 20:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:58.413 20:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b 00:20:58.413 20:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=5b7a0101-ee75-44bd-b64f-b6a56d193f2b 00:20:58.413 20:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:58.413 20:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:58.413 20:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:58.413 20:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:58.413 20:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:58.413 20:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:20:58.413 20:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:58.413 20:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:58.413 20:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:58.413 20:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:58.413 20:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@3 -- # 
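Sourcing test/nvmf/common.sh pins the values the whole multipath run reuses: listener ports 4420/4421/4422, serial SPDKISFASTANDAWESOME, a host NQN/ID freshly generated with nvme gen-hostnqn, and NET_TYPE=virt, which is what later routes setup through the veth topology instead of physical NICs. The NVME_CONNECT/NVME_HOST variables are how a kernel-initiator test would consume those defaults; purely as an illustration (this particular test drives the target through bdevperf, not nvme-cli):

  # Illustrative kernel-initiator connect using the exported defaults (not executed in this run).
  nvme connect -t tcp -a 10.0.0.3 -s "$NVMF_PORT" \
      -n nqn.2016-06.io.spdk:cnode1 \
      --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"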
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:58.413 20:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:58.413 20:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@5 -- # export PATH 00:20:58.413 20:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:58.413 20:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@51 -- # : 0 00:20:58.413 20:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:58.413 20:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:58.413 20:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:58.413 20:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:58.413 20:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:58.413 20:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:58.413 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:58.413 20:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:58.413 20:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:58.413 20:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:58.413 20:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:58.413 20:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:58.413 20:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@14 
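The "[: : integer expression expected" complaint from test/nvmf/common.sh line 33 is benign noise in this configuration: build_nvmf_app_args runs a numeric test against a toggle that is empty rather than 0 in this environment, '[' '' -eq 1 ']', so bash prints the warning and the test simply evaluates false. The usual guard for that pattern (SOME_TOGGLE is a stand-in name, not the actual variable from common.sh):

  # Default an empty toggle to 0 before comparing numerically.
  [ "${SOME_TOGGLE:-0}" -eq 1 ] && echo "toggle enabled"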
-- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:58.413 20:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:20:58.413 20:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:58.413 20:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:20:58.413 20:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@30 -- # nvmftestinit 00:20:58.413 20:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:58.413 20:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:58.413 20:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:58.413 20:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:58.413 20:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:58.413 20:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:58.413 20:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:58.413 20:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:58.413 20:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:20:58.413 20:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:20:58.413 20:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:20:58.413 20:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:20:58.413 20:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:20:58.413 20:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@460 -- # nvmf_veth_init 00:20:58.413 20:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:58.413 20:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:20:58.413 20:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:20:58.413 20:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:20:58.413 20:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:58.413 20:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:20:58.413 20:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:58.413 20:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:20:58.413 20:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:58.414 20:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:20:58.414 20:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:58.414 20:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@156 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:58.414 20:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:58.414 20:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:58.414 20:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:58.414 20:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:58.414 20:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:20:58.414 Cannot find device "nvmf_init_br" 00:20:58.414 20:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # true 00:20:58.414 20:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:20:58.414 Cannot find device "nvmf_init_br2" 00:20:58.414 20:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # true 00:20:58.414 20:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:20:58.414 Cannot find device "nvmf_tgt_br" 00:20:58.414 20:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@164 -- # true 00:20:58.414 20:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:20:58.414 Cannot find device "nvmf_tgt_br2" 00:20:58.414 20:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@165 -- # true 00:20:58.414 20:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:20:58.414 Cannot find device "nvmf_init_br" 00:20:58.414 20:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # true 00:20:58.414 20:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:20:58.414 Cannot find device "nvmf_init_br2" 00:20:58.414 20:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@167 -- # true 00:20:58.414 20:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:20:58.672 Cannot find device "nvmf_tgt_br" 00:20:58.673 20:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@168 -- # true 00:20:58.673 20:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:20:58.673 Cannot find device "nvmf_tgt_br2" 00:20:58.673 20:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # true 00:20:58.673 20:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:20:58.673 Cannot find device "nvmf_br" 00:20:58.673 20:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # true 00:20:58.673 20:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:20:58.673 Cannot find device "nvmf_init_if" 00:20:58.673 20:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # true 00:20:58.673 20:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:20:58.673 Cannot find device "nvmf_init_if2" 00:20:58.673 20:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@172 -- # true 00:20:58.673 20:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 
00:20:58.673 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:58.673 20:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@173 -- # true 00:20:58.673 20:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:58.673 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:58.673 20:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # true 00:20:58.673 20:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:20:58.673 20:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:58.673 20:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:20:58.673 20:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:58.673 20:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:58.673 20:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:58.673 20:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:58.673 20:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:58.673 20:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:20:58.673 20:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:20:58.673 20:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:20:58.673 20:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:20:58.673 20:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:20:58.673 20:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:20:58.673 20:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:20:58.673 20:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:20:58.673 20:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:20:58.673 20:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:58.673 20:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:58.673 20:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:58.673 20:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:20:58.673 20:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:20:58.673 20:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 
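nvmf_veth_init builds a self-contained four-port topology for NET_TYPE=virt: the stale teardown attempts above fail harmlessly (the devices and namespace from a previous run do not exist), then a fresh nvmf_tgt_ns_spdk namespace is created, two veth ends stay on the host side (nvmf_init_if/if2 at 10.0.0.1 and 10.0.0.2), two are moved into the namespace (nvmf_tgt_if/if2 at 10.0.0.3 and 10.0.0.4), and all four bridge-side peers are about to be joined by nvmf_br. Condensed to one initiator/target pair, the layout being set up is:

  # Condensed sketch of the veth/bridge layout this trace creates (host side <-> target netns).
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br     # host side, 10.0.0.1
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br      # target side, 10.0.0.3
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up && ip link set nvmf_init_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link set nvmf_tgt_br up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br && ip link set nvmf_tgt_br master nvmf_br

(The second pair, 10.0.0.2 on the host and 10.0.0.4 in the namespace, is created the same way.)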
00:20:58.673 20:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:20:58.673 20:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:58.931 20:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:58.931 20:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:58.931 20:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:20:58.931 20:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:20:58.931 20:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:20:58.931 20:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:58.931 20:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:20:58.931 20:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:20:58.931 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:58.931 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms 00:20:58.931 00:20:58.931 --- 10.0.0.3 ping statistics --- 00:20:58.931 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:58.931 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:20:58.931 20:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:20:58.931 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:20:58.931 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.050 ms 00:20:58.931 00:20:58.931 --- 10.0.0.4 ping statistics --- 00:20:58.931 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:58.931 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:20:58.931 20:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:58.931 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:58.931 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.047 ms 00:20:58.931 00:20:58.931 --- 10.0.0.1 ping statistics --- 00:20:58.931 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:58.931 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:20:58.931 20:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:20:58.931 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:58.931 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.069 ms 00:20:58.931 00:20:58.931 --- 10.0.0.2 ping statistics --- 00:20:58.931 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:58.931 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:20:58.931 20:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:58.931 20:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@461 -- # return 0 00:20:58.931 20:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:58.931 20:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:58.931 20:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:58.931 20:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:58.931 20:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:58.931 20:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:58.931 20:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:58.931 20:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:20:58.931 20:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:58.931 20:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:58.931 20:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:20:58.931 20:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@509 -- # nvmfpid=81333 00:20:58.931 20:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:20:58.931 20:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@510 -- # waitforlisten 81333 00:20:58.931 20:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@835 -- # '[' -z 81333 ']' 00:20:58.931 20:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:58.931 20:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:58.931 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:58.931 20:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:58.931 20:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:58.931 20:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:20:58.931 [2024-11-26 20:48:53.819151] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
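With connectivity verified by the four pings, nvmfappstart launches the SPDK target inside the namespace: NVMF_APP is prefixed with 'ip netns exec nvmf_tgt_ns_spdk', the target gets core mask 0x3 and the full tracepoint mask (-e 0xFFFF), its pid is recorded as 81333, and waitforlisten blocks until the RPC socket /var/tmp/spdk.sock answers. A rough sketch of that wait loop (a simplification of the helper in autotest_common.sh):

  # Poll the target's RPC socket until it responds, or give up after ~100 tries.
  waitforlisten_sketch() {
      local pid=$1 sock=${2:-/var/tmp/spdk.sock} i
      for ((i = 0; i < 100; i++)); do
          kill -0 "$pid" 2>/dev/null || return 1                  # target died while starting
          scripts/rpc.py -s "$sock" rpc_get_methods &>/dev/null && return 0
          sleep 0.5
      done
      return 1
  }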
00:20:58.931 [2024-11-26 20:48:53.819284] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:59.190 [2024-11-26 20:48:53.980633] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:59.190 [2024-11-26 20:48:54.053932] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:59.190 [2024-11-26 20:48:54.054009] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:59.190 [2024-11-26 20:48:54.054025] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:59.190 [2024-11-26 20:48:54.054038] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:59.190 [2024-11-26 20:48:54.054050] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:59.190 [2024-11-26 20:48:54.055196] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:59.190 [2024-11-26 20:48:54.055210] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:59.190 [2024-11-26 20:48:54.111383] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:00.125 20:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:00.125 20:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@868 -- # return 0 00:21:00.125 20:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:00.125 20:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:00.125 20:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:21:00.125 20:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:00.125 20:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@33 -- # nvmfapp_pid=81333 00:21:00.125 20:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:00.125 [2024-11-26 20:48:55.067618] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:00.125 20:48:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:21:00.384 Malloc0 00:21:00.642 20:48:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:21:00.642 20:48:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:00.900 20:48:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:21:01.159 [2024-11-26 20:48:55.987114] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:21:01.159 20:48:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:21:01.419 [2024-11-26 20:48:56.187217] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:21:01.419 20:48:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:21:01.419 20:48:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@44 -- # bdevperf_pid=81389 00:21:01.419 20:48:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:01.419 20:48:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@47 -- # waitforlisten 81389 /var/tmp/bdevperf.sock 00:21:01.419 20:48:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@835 -- # '[' -z 81389 ']' 00:21:01.419 20:48:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:01.419 20:48:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:01.419 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:01.419 20:48:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:01.419 20:48:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:01.419 20:48:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:21:01.676 20:48:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:01.676 20:48:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@868 -- # return 0 00:21:01.676 20:48:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:21:01.934 20:48:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:21:02.192 Nvme0n1 00:21:02.192 20:48:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:21:02.449 Nvme0n1 00:21:02.449 20:48:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@78 -- # sleep 1 00:21:02.449 20:48:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:21:03.383 20:48:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:21:03.383 20:48:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:21:03.949 20:48:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:21:03.949 20:48:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:21:03.949 20:48:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81421 00:21:03.949 20:48:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:21:03.949 20:48:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 81333 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:21:10.532 20:49:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:21:10.532 20:49:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:21:10.532 20:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:21:10.532 20:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:10.532 Attaching 4 probes... 00:21:10.532 @path[10.0.0.3, 4421]: 17897 00:21:10.532 @path[10.0.0.3, 4421]: 18461 00:21:10.532 @path[10.0.0.3, 4421]: 18406 00:21:10.532 @path[10.0.0.3, 4421]: 18296 00:21:10.532 @path[10.0.0.3, 4421]: 18307 00:21:10.532 20:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:21:10.532 20:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:21:10.532 20:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:21:10.532 20:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:21:10.532 20:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:21:10.532 20:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:21:10.532 20:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81421 00:21:10.532 20:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:10.532 20:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:21:10.532 20:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:21:10.532 20:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:21:10.790 20:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:21:10.790 20:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81540 00:21:10.790 20:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:21:10.790 20:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 81333 
/home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:21:17.348 20:49:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:21:17.348 20:49:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:21:17.348 20:49:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:21:17.348 20:49:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:17.348 Attaching 4 probes... 00:21:17.348 @path[10.0.0.3, 4420]: 20058 00:21:17.348 @path[10.0.0.3, 4420]: 17471 00:21:17.348 @path[10.0.0.3, 4420]: 16910 00:21:17.348 @path[10.0.0.3, 4420]: 17058 00:21:17.348 @path[10.0.0.3, 4420]: 17138 00:21:17.348 20:49:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:21:17.348 20:49:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:21:17.348 20:49:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:21:17.348 20:49:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:21:17.348 20:49:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:21:17.348 20:49:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:21:17.348 20:49:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81540 00:21:17.348 20:49:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:17.348 20:49:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:21:17.348 20:49:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:21:17.348 20:49:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:21:17.606 20:49:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:21:17.606 20:49:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81651 00:21:17.606 20:49:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 81333 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:21:17.606 20:49:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:21:24.161 20:49:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:21:24.161 20:49:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:21:24.161 20:49:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:21:24.161 20:49:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:24.161 Attaching 4 probes... 00:21:24.161 @path[10.0.0.3, 4421]: 15840 00:21:24.161 @path[10.0.0.3, 4421]: 18491 00:21:24.161 @path[10.0.0.3, 4421]: 18517 00:21:24.161 @path[10.0.0.3, 4421]: 18462 00:21:24.161 @path[10.0.0.3, 4421]: 21045 00:21:24.161 20:49:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:21:24.161 20:49:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:21:24.161 20:49:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:21:24.161 20:49:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:21:24.161 20:49:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:21:24.161 20:49:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:21:24.161 20:49:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81651 00:21:24.161 20:49:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:24.161 20:49:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:21:24.161 20:49:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:21:24.418 20:49:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:21:24.418 20:49:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:21:24.418 20:49:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81765 00:21:24.418 20:49:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:21:24.418 20:49:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 81333 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:21:30.978 20:49:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:21:30.978 20:49:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:21:30.978 20:49:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port= 00:21:30.978 20:49:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:30.978 Attaching 4 probes... 
00:21:30.978 00:21:30.978 00:21:30.978 00:21:30.978 00:21:30.978 00:21:30.978 20:49:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:21:30.978 20:49:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:21:30.978 20:49:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:21:30.978 20:49:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port= 00:21:30.978 20:49:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:21:30.978 20:49:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:21:30.978 20:49:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81765 00:21:30.978 20:49:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:30.978 20:49:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:21:30.978 20:49:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:21:30.978 20:49:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:21:31.300 20:49:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:21:31.300 20:49:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81883 00:21:31.300 20:49:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:21:31.300 20:49:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 81333 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:21:37.871 20:49:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:21:37.871 20:49:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:21:37.871 20:49:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:21:37.871 20:49:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:37.871 Attaching 4 probes... 
00:21:37.871 @path[10.0.0.3, 4421]: 21647 00:21:37.871 @path[10.0.0.3, 4421]: 21992 00:21:37.871 @path[10.0.0.3, 4421]: 21693 00:21:37.871 @path[10.0.0.3, 4421]: 21704 00:21:37.871 @path[10.0.0.3, 4421]: 21928 00:21:37.871 20:49:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:21:37.871 20:49:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:21:37.871 20:49:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:21:37.871 20:49:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:21:37.871 20:49:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:21:37.871 20:49:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:21:37.871 20:49:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81883 00:21:37.871 20:49:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:37.871 20:49:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:21:37.871 20:49:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@101 -- # sleep 1 00:21:38.807 20:49:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:21:38.807 20:49:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=82001 00:21:38.807 20:49:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 81333 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:21:38.807 20:49:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:21:45.394 20:49:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:21:45.395 20:49:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:21:45.395 20:49:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:21:45.395 20:49:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:45.395 Attaching 4 probes... 
00:21:45.395 @path[10.0.0.3, 4420]: 21387 00:21:45.395 @path[10.0.0.3, 4420]: 20904 00:21:45.395 @path[10.0.0.3, 4420]: 19630 00:21:45.395 @path[10.0.0.3, 4420]: 21361 00:21:45.395 @path[10.0.0.3, 4420]: 21808 00:21:45.395 20:49:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:21:45.395 20:49:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:21:45.395 20:49:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:21:45.395 20:49:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:21:45.395 20:49:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:21:45.395 20:49:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:21:45.395 20:49:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 82001 00:21:45.395 20:49:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:45.395 20:49:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:21:45.395 [2024-11-26 20:49:40.241035] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:21:45.395 20:49:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:21:45.653 20:49:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@111 -- # sleep 6 00:21:52.216 20:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:21:52.216 20:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 81333 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:21:52.216 20:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=82176 00:21:52.216 20:49:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:21:58.796 20:49:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:21:58.796 20:49:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:21:58.796 20:49:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:21:58.796 20:49:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:58.796 Attaching 4 probes... 
00:21:58.796 @path[10.0.0.3, 4421]: 21606 00:21:58.796 @path[10.0.0.3, 4421]: 21928 00:21:58.796 @path[10.0.0.3, 4421]: 21928 00:21:58.796 @path[10.0.0.3, 4421]: 22079 00:21:58.796 @path[10.0.0.3, 4421]: 22015 00:21:58.796 20:49:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:21:58.796 20:49:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:21:58.796 20:49:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:21:58.796 20:49:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:21:58.796 20:49:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:21:58.796 20:49:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:21:58.796 20:49:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 82176 00:21:58.796 20:49:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:58.796 20:49:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@114 -- # killprocess 81389 00:21:58.796 20:49:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # '[' -z 81389 ']' 00:21:58.797 20:49:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@958 -- # kill -0 81389 00:21:58.797 20:49:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # uname 00:21:58.797 20:49:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:58.797 20:49:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81389 00:21:58.797 killing process with pid 81389 00:21:58.797 20:49:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:21:58.797 20:49:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:21:58.797 20:49:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81389' 00:21:58.797 20:49:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@973 -- # kill 81389 00:21:58.797 20:49:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@978 -- # wait 81389 00:21:58.797 { 00:21:58.797 "results": [ 00:21:58.797 { 00:21:58.797 "job": "Nvme0n1", 00:21:58.797 "core_mask": "0x4", 00:21:58.797 "workload": "verify", 00:21:58.797 "status": "terminated", 00:21:58.797 "verify_range": { 00:21:58.797 "start": 0, 00:21:58.797 "length": 16384 00:21:58.797 }, 00:21:58.797 "queue_depth": 128, 00:21:58.797 "io_size": 4096, 00:21:58.797 "runtime": 55.402758, 00:21:58.797 "iops": 8707.99608929216, 00:21:58.797 "mibps": 34.0156097237975, 00:21:58.797 "io_failed": 0, 00:21:58.797 "io_timeout": 0, 00:21:58.797 "avg_latency_us": 14676.93998519255, 00:21:58.797 "min_latency_us": 674.8647619047618, 00:21:58.797 "max_latency_us": 7030452.419047619 00:21:58.797 } 00:21:58.797 ], 00:21:58.797 "core_count": 1 00:21:58.797 } 00:21:58.797 20:49:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@116 -- # wait 81389 00:21:58.797 20:49:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:21:58.797 [2024-11-26 20:48:56.246482] Starting SPDK v25.01-pre git sha1 2f2acf4eb / 
DPDK 24.03.0 initialization... 00:21:58.797 [2024-11-26 20:48:56.246583] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81389 ] 00:21:58.797 [2024-11-26 20:48:56.399432] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:58.797 [2024-11-26 20:48:56.457141] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:58.797 [2024-11-26 20:48:56.505041] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:58.797 Running I/O for 90 seconds... 00:21:58.797 10007.00 IOPS, 39.09 MiB/s [2024-11-26T20:49:53.790Z] 9927.00 IOPS, 38.78 MiB/s [2024-11-26T20:49:53.790Z] 9676.67 IOPS, 37.80 MiB/s [2024-11-26T20:49:53.790Z] 9565.50 IOPS, 37.37 MiB/s [2024-11-26T20:49:53.790Z] 9492.40 IOPS, 37.08 MiB/s [2024-11-26T20:49:53.790Z] 9435.67 IOPS, 36.86 MiB/s [2024-11-26T20:49:53.790Z] 9397.43 IOPS, 36.71 MiB/s [2024-11-26T20:49:53.790Z] 9361.75 IOPS, 36.57 MiB/s [2024-11-26T20:49:53.790Z] [2024-11-26 20:49:05.596773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:84272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.797 [2024-11-26 20:49:05.596839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:21:58.797 [2024-11-26 20:49:05.596904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:84280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.797 [2024-11-26 20:49:05.596920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:21:58.797 [2024-11-26 20:49:05.596940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:84288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.797 [2024-11-26 20:49:05.596954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:21:58.797 [2024-11-26 20:49:05.596974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:84296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.797 [2024-11-26 20:49:05.596988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:21:58.797 [2024-11-26 20:49:05.597006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:84304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.797 [2024-11-26 20:49:05.597019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:21:58.797 [2024-11-26 20:49:05.597039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:84312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.797 [2024-11-26 20:49:05.597053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:21:58.797 [2024-11-26 20:49:05.597071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:84320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.797 [2024-11-26 20:49:05.597085] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:21:58.797 [2024-11-26 20:49:05.597103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:84328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.797 [2024-11-26 20:49:05.597116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:21:58.797 [2024-11-26 20:49:05.597135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:84336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.797 [2024-11-26 20:49:05.597148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:21:58.797 [2024-11-26 20:49:05.597167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:84344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.797 [2024-11-26 20:49:05.597219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:21:58.797 [2024-11-26 20:49:05.597239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:84352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.797 [2024-11-26 20:49:05.597252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:21:58.797 [2024-11-26 20:49:05.597271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:84360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.797 [2024-11-26 20:49:05.597284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:21:58.797 [2024-11-26 20:49:05.597303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:84368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.797 [2024-11-26 20:49:05.597316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:21:58.797 [2024-11-26 20:49:05.597335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:84376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.797 [2024-11-26 20:49:05.597348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:21:58.797 [2024-11-26 20:49:05.597366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:84384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.797 [2024-11-26 20:49:05.597379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:21:58.797 [2024-11-26 20:49:05.597398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:84392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.797 [2024-11-26 20:49:05.597411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:21:58.797 [2024-11-26 20:49:05.597430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:83888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.797 
[2024-11-26 20:49:05.597444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:21:58.797 [2024-11-26 20:49:05.597464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:83896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.797 [2024-11-26 20:49:05.597478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:21:58.797 [2024-11-26 20:49:05.597497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:83904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.797 [2024-11-26 20:49:05.597510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:21:58.797 [2024-11-26 20:49:05.597529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:83912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.797 [2024-11-26 20:49:05.597542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:21:58.797 [2024-11-26 20:49:05.597561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:83920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.797 [2024-11-26 20:49:05.597575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:21:58.797 [2024-11-26 20:49:05.597594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:83928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.797 [2024-11-26 20:49:05.597612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:58.797 [2024-11-26 20:49:05.597632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:83936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.797 [2024-11-26 20:49:05.597645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:58.797 [2024-11-26 20:49:05.597665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:83944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.797 [2024-11-26 20:49:05.597678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:21:58.798 [2024-11-26 20:49:05.597697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:83952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.798 [2024-11-26 20:49:05.597710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:58.798 [2024-11-26 20:49:05.597729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:83960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.798 [2024-11-26 20:49:05.597743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:21:58.798 [2024-11-26 20:49:05.597762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:83968 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.798 [2024-11-26 20:49:05.597775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:21:58.798 [2024-11-26 20:49:05.597794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:83976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.798 [2024-11-26 20:49:05.597807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:58.798 [2024-11-26 20:49:05.597826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:83984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.798 [2024-11-26 20:49:05.597840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:58.798 [2024-11-26 20:49:05.597858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:83992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.798 [2024-11-26 20:49:05.597872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:21:58.798 [2024-11-26 20:49:05.597891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:84000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.798 [2024-11-26 20:49:05.597904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:21:58.798 [2024-11-26 20:49:05.597923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:84008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.798 [2024-11-26 20:49:05.597937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:21:58.798 [2024-11-26 20:49:05.597960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:84400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.798 [2024-11-26 20:49:05.597976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:58.798 [2024-11-26 20:49:05.598006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:84408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.798 [2024-11-26 20:49:05.598020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:58.798 [2024-11-26 20:49:05.598045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:84416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.798 [2024-11-26 20:49:05.598059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:58.798 [2024-11-26 20:49:05.598078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:84424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.798 [2024-11-26 20:49:05.598092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:58.798 [2024-11-26 20:49:05.598111] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:84432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.798 [2024-11-26 20:49:05.598125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:21:58.798 [2024-11-26 20:49:05.598144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:84440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.798 [2024-11-26 20:49:05.598165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:21:58.798 [2024-11-26 20:49:05.598185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:84448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.798 [2024-11-26 20:49:05.598200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:21:58.798 [2024-11-26 20:49:05.598219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:84456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.798 [2024-11-26 20:49:05.598233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:21:58.798 [2024-11-26 20:49:05.598252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:84464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.798 [2024-11-26 20:49:05.598265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:21:58.798 [2024-11-26 20:49:05.598284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:84472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.798 [2024-11-26 20:49:05.598298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:21:58.798 [2024-11-26 20:49:05.598317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:84480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.798 [2024-11-26 20:49:05.598330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:21:58.798 [2024-11-26 20:49:05.598349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:84488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.798 [2024-11-26 20:49:05.598363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:21:58.798 [2024-11-26 20:49:05.598382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:84496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.798 [2024-11-26 20:49:05.598396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:21:58.798 [2024-11-26 20:49:05.598415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:84504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.798 [2024-11-26 20:49:05.598428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 
00:21:58.798 [2024-11-26 20:49:05.598452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:84512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.798 [2024-11-26 20:49:05.598466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:21:58.798 [2024-11-26 20:49:05.598485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:84520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.798 [2024-11-26 20:49:05.598499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:21:58.798 [2024-11-26 20:49:05.598518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:84016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.798 [2024-11-26 20:49:05.598531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:21:58.798 [2024-11-26 20:49:05.598553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:84024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.798 [2024-11-26 20:49:05.598567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:21:58.798 [2024-11-26 20:49:05.598586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:84032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.798 [2024-11-26 20:49:05.598600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:21:58.798 [2024-11-26 20:49:05.598619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:84040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.798 [2024-11-26 20:49:05.598633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:21:58.798 [2024-11-26 20:49:05.598651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:84048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.798 [2024-11-26 20:49:05.598665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:21:58.798 [2024-11-26 20:49:05.598684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:84056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.798 [2024-11-26 20:49:05.598697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:58.798 [2024-11-26 20:49:05.598716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:84064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.798 [2024-11-26 20:49:05.598730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:58.798 [2024-11-26 20:49:05.598749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:84072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.798 [2024-11-26 20:49:05.598762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:21:58.798 [2024-11-26 20:49:05.598781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:84528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.798 [2024-11-26 20:49:05.598794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:21:58.798 [2024-11-26 20:49:05.598813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:84536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.798 [2024-11-26 20:49:05.598827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:21:58.798 [2024-11-26 20:49:05.598846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:84544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.798 [2024-11-26 20:49:05.598864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:21:58.799 [2024-11-26 20:49:05.598883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:84552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.799 [2024-11-26 20:49:05.598896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:21:58.799 [2024-11-26 20:49:05.598915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:84560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.799 [2024-11-26 20:49:05.598929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:21:58.799 [2024-11-26 20:49:05.598948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:84568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.799 [2024-11-26 20:49:05.598961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:21:58.799 [2024-11-26 20:49:05.598980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:84576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.799 [2024-11-26 20:49:05.598994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:21:58.799 [2024-11-26 20:49:05.599013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:84584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.799 [2024-11-26 20:49:05.599026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:21:58.799 [2024-11-26 20:49:05.599078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:84592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.799 [2024-11-26 20:49:05.599093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:21:58.799 [2024-11-26 20:49:05.599114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:84600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.799 [2024-11-26 20:49:05.599128] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:21:58.799 [2024-11-26 20:49:05.599147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:84608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.799 [2024-11-26 20:49:05.599170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:58.799 [2024-11-26 20:49:05.599189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:84616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.799 [2024-11-26 20:49:05.599203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:21:58.799 [2024-11-26 20:49:05.599222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:84624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.799 [2024-11-26 20:49:05.599236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:21:58.799 [2024-11-26 20:49:05.599255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:84632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.799 [2024-11-26 20:49:05.599269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:21:58.799 [2024-11-26 20:49:05.599288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:84640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.799 [2024-11-26 20:49:05.599310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:21:58.799 [2024-11-26 20:49:05.599337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:84648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.799 [2024-11-26 20:49:05.599351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:21:58.799 [2024-11-26 20:49:05.599370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:84656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.799 [2024-11-26 20:49:05.599383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:21:58.799 [2024-11-26 20:49:05.599403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:84664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.799 [2024-11-26 20:49:05.599417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:21:58.799 [2024-11-26 20:49:05.599436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:84672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.799 [2024-11-26 20:49:05.599450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:21:58.799 [2024-11-26 20:49:05.599469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:84680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:21:58.799 [2024-11-26 20:49:05.599483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:21:58.799 [2024-11-26 20:49:05.599502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:84080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.799 [2024-11-26 20:49:05.599515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:21:58.799 [2024-11-26 20:49:05.599535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:84088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.799 [2024-11-26 20:49:05.599549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:21:58.799 [2024-11-26 20:49:05.599568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:84096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.799 [2024-11-26 20:49:05.599582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:21:58.799 [2024-11-26 20:49:05.599601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:84104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.799 [2024-11-26 20:49:05.599615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:21:58.799 [2024-11-26 20:49:05.599634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:84112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.799 [2024-11-26 20:49:05.599648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:21:58.799 [2024-11-26 20:49:05.599667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:84120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.799 [2024-11-26 20:49:05.599680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:21:58.799 [2024-11-26 20:49:05.599699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:84128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.799 [2024-11-26 20:49:05.599713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:21:58.799 [2024-11-26 20:49:05.599736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:84136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.799 [2024-11-26 20:49:05.599750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:21:58.799 [2024-11-26 20:49:05.599769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:84688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.799 [2024-11-26 20:49:05.599782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:21:58.799 [2024-11-26 20:49:05.599801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:61 nsid:1 lba:84696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.799 [2024-11-26 20:49:05.599814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:58.799 [2024-11-26 20:49:05.599833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:84704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.799 [2024-11-26 20:49:05.599847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:58.799 [2024-11-26 20:49:05.599866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:84712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.799 [2024-11-26 20:49:05.599879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:21:58.799 [2024-11-26 20:49:05.599901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:84720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.799 [2024-11-26 20:49:05.599915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:21:58.799 [2024-11-26 20:49:05.599935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:84728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.799 [2024-11-26 20:49:05.599949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:21:58.799 [2024-11-26 20:49:05.599968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:84736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.799 [2024-11-26 20:49:05.599982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:21:58.799 [2024-11-26 20:49:05.600001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:84744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.799 [2024-11-26 20:49:05.600015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:21:58.799 [2024-11-26 20:49:05.600034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:84752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.799 [2024-11-26 20:49:05.600048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:21:58.799 [2024-11-26 20:49:05.600067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:84760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.799 [2024-11-26 20:49:05.600081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:21:58.799 [2024-11-26 20:49:05.600101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:84768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.800 [2024-11-26 20:49:05.600114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:21:58.800 [2024-11-26 20:49:05.600138] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:84776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.800 [2024-11-26 20:49:05.600151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:21:58.800 [2024-11-26 20:49:05.600179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:84144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.800 [2024-11-26 20:49:05.600193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:21:58.800 [2024-11-26 20:49:05.600212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:84152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.800 [2024-11-26 20:49:05.600225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:21:58.800 [2024-11-26 20:49:05.600244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:84160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.800 [2024-11-26 20:49:05.600257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:21:58.800 [2024-11-26 20:49:05.600276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:84168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.800 [2024-11-26 20:49:05.600290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:21:58.800 [2024-11-26 20:49:05.600309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:84176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.800 [2024-11-26 20:49:05.600323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:21:58.800 [2024-11-26 20:49:05.600342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:84184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.800 [2024-11-26 20:49:05.600356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:21:58.800 [2024-11-26 20:49:05.600375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:84192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.800 [2024-11-26 20:49:05.600388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:21:58.800 [2024-11-26 20:49:05.600407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:84200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.800 [2024-11-26 20:49:05.600420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:21:58.800 [2024-11-26 20:49:05.600439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:84208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.800 [2024-11-26 20:49:05.600453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 
00:21:58.800 [2024-11-26 20:49:05.600473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:84216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.800 [2024-11-26 20:49:05.600486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:21:58.800 [2024-11-26 20:49:05.600505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:84224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.800 [2024-11-26 20:49:05.600518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:21:58.800 [2024-11-26 20:49:05.600537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:84232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.800 [2024-11-26 20:49:05.600559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:21:58.800 [2024-11-26 20:49:05.600578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:84240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.800 [2024-11-26 20:49:05.600591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:21:58.800 [2024-11-26 20:49:05.600610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:84248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.800 [2024-11-26 20:49:05.600625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:21:58.800 [2024-11-26 20:49:05.600645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:84256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.800 [2024-11-26 20:49:05.600659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:58.800 [2024-11-26 20:49:05.601998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:84264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.800 [2024-11-26 20:49:05.602028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:21:58.800 [2024-11-26 20:49:05.602050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:84784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.800 [2024-11-26 20:49:05.602065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:21:58.800 [2024-11-26 20:49:05.602084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:84792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.800 [2024-11-26 20:49:05.602098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:21:58.800 [2024-11-26 20:49:05.602117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:84800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.800 [2024-11-26 20:49:05.602131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:21:58.800 [2024-11-26 20:49:05.602150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:84808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.800 [2024-11-26 20:49:05.602176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:58.800 [2024-11-26 20:49:05.602195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:84816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.800 [2024-11-26 20:49:05.602209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.800 [2024-11-26 20:49:05.602228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:84824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.800 [2024-11-26 20:49:05.602241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:58.800 [2024-11-26 20:49:05.602260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:84832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.800 [2024-11-26 20:49:05.602274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:58.800 [2024-11-26 20:49:05.602717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:84840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.800 [2024-11-26 20:49:05.602745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:21:58.800 [2024-11-26 20:49:05.602766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:84848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.800 [2024-11-26 20:49:05.602780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:21:58.800 [2024-11-26 20:49:05.602799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:84856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.800 [2024-11-26 20:49:05.602813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:21:58.800 [2024-11-26 20:49:05.602832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:84864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.800 [2024-11-26 20:49:05.602846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:21:58.800 [2024-11-26 20:49:05.602865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:84872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.800 [2024-11-26 20:49:05.602878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:21:58.800 [2024-11-26 20:49:05.602897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:84880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.800 [2024-11-26 20:49:05.602911] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:21:58.800 [2024-11-26 20:49:05.602930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:84888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.800 [2024-11-26 20:49:05.602945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:21:58.800 [2024-11-26 20:49:05.602964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:84896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.800 [2024-11-26 20:49:05.602978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:21:58.800 [2024-11-26 20:49:05.603001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:84904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.800 [2024-11-26 20:49:05.603015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:21:58.800 9419.44 IOPS, 36.79 MiB/s [2024-11-26T20:49:53.793Z] 9472.70 IOPS, 37.00 MiB/s [2024-11-26T20:49:53.793Z] 9381.00 IOPS, 36.64 MiB/s [2024-11-26T20:49:53.793Z] 9304.58 IOPS, 36.35 MiB/s [2024-11-26T20:49:53.793Z] 9246.69 IOPS, 36.12 MiB/s [2024-11-26T20:49:53.793Z] 9209.64 IOPS, 35.98 MiB/s [2024-11-26T20:49:53.794Z] [2024-11-26 20:49:12.244898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:33976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.801 [2024-11-26 20:49:12.244984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:21:58.801 [2024-11-26 20:49:12.245040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:33984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.801 [2024-11-26 20:49:12.245059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:21:58.801 [2024-11-26 20:49:12.245083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:33992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.801 [2024-11-26 20:49:12.245100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:21:58.801 [2024-11-26 20:49:12.245123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:34000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.801 [2024-11-26 20:49:12.245189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:21:58.801 [2024-11-26 20:49:12.245228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:34008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.801 [2024-11-26 20:49:12.245246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:58.801 [2024-11-26 20:49:12.245269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:34016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.801 [2024-11-26 20:49:12.245286] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:21:58.801 [2024-11-26 20:49:12.245309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:34024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.801 [2024-11-26 20:49:12.245325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:21:58.801 [2024-11-26 20:49:12.245348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:34032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.801 [2024-11-26 20:49:12.245364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:21:58.801 [2024-11-26 20:49:12.245390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:34040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.801 [2024-11-26 20:49:12.245407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:21:58.801 [2024-11-26 20:49:12.245430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:34048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.801 [2024-11-26 20:49:12.245446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:58.801 [2024-11-26 20:49:12.245468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:34056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.801 [2024-11-26 20:49:12.245484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.801 [2024-11-26 20:49:12.245507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:34064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.801 [2024-11-26 20:49:12.245522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:58.801 [2024-11-26 20:49:12.245545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:34072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.801 [2024-11-26 20:49:12.245561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:58.801 [2024-11-26 20:49:12.245583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:34080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.801 [2024-11-26 20:49:12.245598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:21:58.801 [2024-11-26 20:49:12.245623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:34088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.801 [2024-11-26 20:49:12.245638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:21:58.801 [2024-11-26 20:49:12.245661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:34096 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:21:58.801 [2024-11-26 20:49:12.245685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:21:58.801 [2024-11-26 20:49:12.245708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:33592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.801 [2024-11-26 20:49:12.245724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:21:58.801 [2024-11-26 20:49:12.245750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:33600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.801 [2024-11-26 20:49:12.245766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:21:58.801 [2024-11-26 20:49:12.245789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:33608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.801 [2024-11-26 20:49:12.245806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:21:58.801 [2024-11-26 20:49:12.245828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:33616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.801 [2024-11-26 20:49:12.245845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:21:58.801 [2024-11-26 20:49:12.245868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:33624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.801 [2024-11-26 20:49:12.245884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:21:58.801 [2024-11-26 20:49:12.245907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:33632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.801 [2024-11-26 20:49:12.245923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:21:58.801 [2024-11-26 20:49:12.245945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:33640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.801 [2024-11-26 20:49:12.245962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:21:58.801 [2024-11-26 20:49:12.245984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:33648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.801 [2024-11-26 20:49:12.246000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:21:58.801 [2024-11-26 20:49:12.246027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:33656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.801 [2024-11-26 20:49:12.246044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:21:58.801 [2024-11-26 20:49:12.246067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:71 nsid:1 lba:33664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.801 [2024-11-26 20:49:12.246083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:21:58.801 [2024-11-26 20:49:12.246106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:33672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.801 [2024-11-26 20:49:12.246122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:21:58.801 [2024-11-26 20:49:12.246144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:33680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.801 [2024-11-26 20:49:12.246172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:21:58.801 [2024-11-26 20:49:12.246202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:33688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.802 [2024-11-26 20:49:12.246218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:21:58.802 [2024-11-26 20:49:12.246241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:33696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.802 [2024-11-26 20:49:12.246257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:21:58.802 [2024-11-26 20:49:12.246280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:33704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.802 [2024-11-26 20:49:12.246296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:21:58.802 [2024-11-26 20:49:12.246319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:33712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.802 [2024-11-26 20:49:12.246335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:21:58.802 [2024-11-26 20:49:12.246372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:34104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.802 [2024-11-26 20:49:12.246390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:21:58.802 [2024-11-26 20:49:12.246415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:34112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.802 [2024-11-26 20:49:12.246432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:21:58.802 [2024-11-26 20:49:12.246455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:34120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.802 [2024-11-26 20:49:12.246472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:21:58.802 [2024-11-26 20:49:12.246495] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:34128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.802 [2024-11-26 20:49:12.246511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:21:58.802 [2024-11-26 20:49:12.246534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:34136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.802 [2024-11-26 20:49:12.246550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:21:58.802 [2024-11-26 20:49:12.246573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:34144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.802 [2024-11-26 20:49:12.246589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:21:58.802 [2024-11-26 20:49:12.246612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:34152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.802 [2024-11-26 20:49:12.246628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:21:58.802 [2024-11-26 20:49:12.246651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:34160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.802 [2024-11-26 20:49:12.246667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:21:58.802 [2024-11-26 20:49:12.246696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:34168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.802 [2024-11-26 20:49:12.246712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:21:58.802 [2024-11-26 20:49:12.246735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:34176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.802 [2024-11-26 20:49:12.246751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:21:58.802 [2024-11-26 20:49:12.246773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:34184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.802 [2024-11-26 20:49:12.246789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:21:58.802 [2024-11-26 20:49:12.246812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:34192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.802 [2024-11-26 20:49:12.246829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:58.802 [2024-11-26 20:49:12.246863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:34200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.802 [2024-11-26 20:49:12.246885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0022 p:0 
m:0 dnr:0 00:21:58.802 [2024-11-26 20:49:12.246911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:34208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.802 [2024-11-26 20:49:12.246931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:21:58.802 [2024-11-26 20:49:12.246957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:34216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.802 [2024-11-26 20:49:12.246976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:58.802 [2024-11-26 20:49:12.247002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:34224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.802 [2024-11-26 20:49:12.247021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:21:58.802 [2024-11-26 20:49:12.247047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:33720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.802 [2024-11-26 20:49:12.247067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:21:58.802 [2024-11-26 20:49:12.247095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:33728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.802 [2024-11-26 20:49:12.247116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:58.802 [2024-11-26 20:49:12.247143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:33736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.802 [2024-11-26 20:49:12.247175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:58.802 [2024-11-26 20:49:12.247201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:33744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.802 [2024-11-26 20:49:12.247221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:21:58.802 [2024-11-26 20:49:12.247253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:33752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.802 [2024-11-26 20:49:12.247273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:21:58.802 [2024-11-26 20:49:12.247308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:33760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.802 [2024-11-26 20:49:12.247329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:21:58.802 [2024-11-26 20:49:12.247353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:33768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.802 [2024-11-26 20:49:12.247369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:58.802 [2024-11-26 20:49:12.247392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:33776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.802 [2024-11-26 20:49:12.247408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:58.802 [2024-11-26 20:49:12.247431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:33784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.802 [2024-11-26 20:49:12.247447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:58.802 [2024-11-26 20:49:12.247470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:33792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.802 [2024-11-26 20:49:12.247486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:58.802 [2024-11-26 20:49:12.247509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:33800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.802 [2024-11-26 20:49:12.247525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:21:58.802 [2024-11-26 20:49:12.247548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:33808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.802 [2024-11-26 20:49:12.247564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:21:58.802 [2024-11-26 20:49:12.247587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:33816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.802 [2024-11-26 20:49:12.247607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:21:58.802 [2024-11-26 20:49:12.247640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:33824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.802 [2024-11-26 20:49:12.247658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:21:58.802 [2024-11-26 20:49:12.247681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:33832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.802 [2024-11-26 20:49:12.247698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:21:58.802 [2024-11-26 20:49:12.247721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:33840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.802 [2024-11-26 20:49:12.247738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:21:58.803 [2024-11-26 20:49:12.247777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:34232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.803 [2024-11-26 20:49:12.247822] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:21:58.803 [2024-11-26 20:49:12.247846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:34240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.803 [2024-11-26 20:49:12.247862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:21:58.803 [2024-11-26 20:49:12.247885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:34248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.803 [2024-11-26 20:49:12.247901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:21:58.803 [2024-11-26 20:49:12.247925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:34256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.803 [2024-11-26 20:49:12.247941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:21:58.803 [2024-11-26 20:49:12.247964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:34264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.803 [2024-11-26 20:49:12.247980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:21:58.803 [2024-11-26 20:49:12.248003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:34272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.803 [2024-11-26 20:49:12.248019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:21:58.803 [2024-11-26 20:49:12.248042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:34280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.803 [2024-11-26 20:49:12.248058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:21:58.803 [2024-11-26 20:49:12.248080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:34288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.803 [2024-11-26 20:49:12.248097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:21:58.803 [2024-11-26 20:49:12.248119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:34296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.803 [2024-11-26 20:49:12.248136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:21:58.803 [2024-11-26 20:49:12.248172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:34304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.803 [2024-11-26 20:49:12.248189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:21:58.803 [2024-11-26 20:49:12.248212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:34312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.803 
[2024-11-26 20:49:12.248228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:21:58.803 [2024-11-26 20:49:12.248251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:34320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.803 [2024-11-26 20:49:12.248268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:58.803 [2024-11-26 20:49:12.248291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:34328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.803 [2024-11-26 20:49:12.248313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:58.803 [2024-11-26 20:49:12.248336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:34336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.803 [2024-11-26 20:49:12.248352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:21:58.803 [2024-11-26 20:49:12.248375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:34344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.803 [2024-11-26 20:49:12.248391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:21:58.803 [2024-11-26 20:49:12.248415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:34352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.803 [2024-11-26 20:49:12.248439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:21:58.803 [2024-11-26 20:49:12.248467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:33848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.803 [2024-11-26 20:49:12.248484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:21:58.803 [2024-11-26 20:49:12.248507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:33856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.803 [2024-11-26 20:49:12.248524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:21:58.803 [2024-11-26 20:49:12.248546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:33864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.803 [2024-11-26 20:49:12.248563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:21:58.803 [2024-11-26 20:49:12.248586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:33872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.803 [2024-11-26 20:49:12.248602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:21:58.803 [2024-11-26 20:49:12.248625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:33880 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.803 [2024-11-26 20:49:12.248641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:21:58.803 [2024-11-26 20:49:12.248664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:33888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.803 [2024-11-26 20:49:12.248680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:21:58.803 [2024-11-26 20:49:12.248703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:33896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.803 [2024-11-26 20:49:12.248719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:21:58.803 [2024-11-26 20:49:12.248742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:33904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.803 [2024-11-26 20:49:12.248758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:21:58.803 [2024-11-26 20:49:12.248781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:34360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.803 [2024-11-26 20:49:12.248797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:58.803 [2024-11-26 20:49:12.248826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:34368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.803 [2024-11-26 20:49:12.248842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:21:58.803 [2024-11-26 20:49:12.248864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:34376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.803 [2024-11-26 20:49:12.248881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:21:58.803 [2024-11-26 20:49:12.248903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:34384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.803 [2024-11-26 20:49:12.248919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:21:58.803 [2024-11-26 20:49:12.248942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:34392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.803 [2024-11-26 20:49:12.248958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:21:58.803 [2024-11-26 20:49:12.248981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:34400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.803 [2024-11-26 20:49:12.248997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:21:58.803 [2024-11-26 20:49:12.249020] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:34408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.803 [2024-11-26 20:49:12.249046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:21:58.803 [2024-11-26 20:49:12.249069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:34416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.803 [2024-11-26 20:49:12.249085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:21:58.803 [2024-11-26 20:49:12.249108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:34424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.803 [2024-11-26 20:49:12.249125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:21:58.803 [2024-11-26 20:49:12.249148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:34432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.803 [2024-11-26 20:49:12.249175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:21:58.803 [2024-11-26 20:49:12.249198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:34440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.803 [2024-11-26 20:49:12.249225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:21:58.803 [2024-11-26 20:49:12.249254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:34448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.804 [2024-11-26 20:49:12.249271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:21:58.804 [2024-11-26 20:49:12.249294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:34456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.804 [2024-11-26 20:49:12.249310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:21:58.804 [2024-11-26 20:49:12.249339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:34464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.804 [2024-11-26 20:49:12.249356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:21:58.804 [2024-11-26 20:49:12.249379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:34472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.804 [2024-11-26 20:49:12.249395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:21:58.804 [2024-11-26 20:49:12.249417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:34480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.804 [2024-11-26 20:49:12.249434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:005d p:0 m:0 dnr:0 
00:21:58.804 [2024-11-26 20:49:12.249456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:34488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.804 [2024-11-26 20:49:12.249472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:21:58.804 [2024-11-26 20:49:12.249500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.804 [2024-11-26 20:49:12.249516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:21:58.804 [2024-11-26 20:49:12.249539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:34504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.804 [2024-11-26 20:49:12.249555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:21:58.804 [2024-11-26 20:49:12.249578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:34512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.804 [2024-11-26 20:49:12.249594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:58.804 [2024-11-26 20:49:12.249616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:33912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.804 [2024-11-26 20:49:12.249632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:58.804 [2024-11-26 20:49:12.249656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:33920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.804 [2024-11-26 20:49:12.249672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:21:58.804 [2024-11-26 20:49:12.249694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:33928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.804 [2024-11-26 20:49:12.249712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:21:58.804 [2024-11-26 20:49:12.249735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:33936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.804 [2024-11-26 20:49:12.249751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:21:58.804 [2024-11-26 20:49:12.249774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:33944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.804 [2024-11-26 20:49:12.249790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:21:58.804 [2024-11-26 20:49:12.249813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:33952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.804 [2024-11-26 20:49:12.249835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:21:58.804 [2024-11-26 20:49:12.249858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:33960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.804 [2024-11-26 20:49:12.249874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:21:58.804 [2024-11-26 20:49:12.250527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:33968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.804 [2024-11-26 20:49:12.250555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:21:58.804 [2024-11-26 20:49:12.250589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:34520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.804 [2024-11-26 20:49:12.250606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:21:58.804 [2024-11-26 20:49:12.250636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:34528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.804 [2024-11-26 20:49:12.250653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:21:58.804 [2024-11-26 20:49:12.250683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:34536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.804 [2024-11-26 20:49:12.250699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:21:58.804 [2024-11-26 20:49:12.250729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:34544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.804 [2024-11-26 20:49:12.250746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:21:58.804 [2024-11-26 20:49:12.250776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:34552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.804 [2024-11-26 20:49:12.250792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:21:58.804 [2024-11-26 20:49:12.250823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:34560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.804 [2024-11-26 20:49:12.250847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:21:58.804 [2024-11-26 20:49:12.250881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:34568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.804 [2024-11-26 20:49:12.250901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:21:58.804 [2024-11-26 20:49:12.250945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:34576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.804 [2024-11-26 20:49:12.250966] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:21:58.804 [2024-11-26 20:49:12.250999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.804 [2024-11-26 20:49:12.251019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:21:58.804 [2024-11-26 20:49:12.251052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:34592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.804 [2024-11-26 20:49:12.251081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:21:58.804 [2024-11-26 20:49:12.251114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:34600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.804 [2024-11-26 20:49:12.251134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:21:58.804 [2024-11-26 20:49:12.251182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:34608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.804 [2024-11-26 20:49:12.251202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:21:58.804 9018.07 IOPS, 35.23 MiB/s [2024-11-26T20:49:53.797Z] 8584.50 IOPS, 33.53 MiB/s [2024-11-26T20:49:53.797Z] 8623.53 IOPS, 33.69 MiB/s [2024-11-26T20:49:53.797Z] 8659.33 IOPS, 33.83 MiB/s [2024-11-26T20:49:53.797Z] 8690.11 IOPS, 33.95 MiB/s [2024-11-26T20:49:53.797Z] 8724.00 IOPS, 34.08 MiB/s [2024-11-26T20:49:53.797Z] 8835.43 IOPS, 34.51 MiB/s [2024-11-26T20:49:53.797Z] [2024-11-26 20:49:19.350066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:119104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.804 [2024-11-26 20:49:19.350139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:58.804 [2024-11-26 20:49:19.350214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:119112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.804 [2024-11-26 20:49:19.350231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:58.804 [2024-11-26 20:49:19.350250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:119120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.804 [2024-11-26 20:49:19.350264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:21:58.804 [2024-11-26 20:49:19.350283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:119128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.804 [2024-11-26 20:49:19.350297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:21:58.804 [2024-11-26 20:49:19.350316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:119136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:21:58.804 [2024-11-26 20:49:19.350329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:21:58.804 [2024-11-26 20:49:19.350347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:119144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.804 [2024-11-26 20:49:19.350360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:21:58.805 [2024-11-26 20:49:19.350378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:119152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.805 [2024-11-26 20:49:19.350391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:21:58.805 [2024-11-26 20:49:19.350410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:119160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.805 [2024-11-26 20:49:19.350423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:21:58.805 [2024-11-26 20:49:19.350442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:119168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.805 [2024-11-26 20:49:19.350455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:21:58.805 [2024-11-26 20:49:19.350500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:119176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.805 [2024-11-26 20:49:19.350514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:21:58.805 [2024-11-26 20:49:19.350533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:119184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.805 [2024-11-26 20:49:19.350546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:21:58.805 [2024-11-26 20:49:19.350565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:119192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.805 [2024-11-26 20:49:19.350578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:21:58.805 [2024-11-26 20:49:19.350597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:119200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.805 [2024-11-26 20:49:19.350610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:21:58.805 [2024-11-26 20:49:19.350629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:119208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.805 [2024-11-26 20:49:19.350642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:21:58.805 [2024-11-26 20:49:19.350660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 
nsid:1 lba:119216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.805 [2024-11-26 20:49:19.350673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:21:58.805 [2024-11-26 20:49:19.350692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:119224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.805 [2024-11-26 20:49:19.350705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:21:58.805 [2024-11-26 20:49:19.350723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:118720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.805 [2024-11-26 20:49:19.350737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:21:58.805 [2024-11-26 20:49:19.350757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:118728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.805 [2024-11-26 20:49:19.350770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:21:58.805 [2024-11-26 20:49:19.350788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:118736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.805 [2024-11-26 20:49:19.350801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:21:58.805 [2024-11-26 20:49:19.350819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:118744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.805 [2024-11-26 20:49:19.350833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:58.805 [2024-11-26 20:49:19.350852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:118752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.805 [2024-11-26 20:49:19.350865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:58.805 [2024-11-26 20:49:19.350883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:118760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.805 [2024-11-26 20:49:19.350902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:21:58.805 [2024-11-26 20:49:19.350921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:118768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.805 [2024-11-26 20:49:19.350935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:21:58.805 [2024-11-26 20:49:19.350954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:118776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.805 [2024-11-26 20:49:19.350967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:21:58.805 [2024-11-26 20:49:19.350986] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:118784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.805 [2024-11-26 20:49:19.350999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:21:58.805 [2024-11-26 20:49:19.351017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:118792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.805 [2024-11-26 20:49:19.351031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:21:58.805 [2024-11-26 20:49:19.351050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:118800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.805 [2024-11-26 20:49:19.351063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:21:58.805 [2024-11-26 20:49:19.351082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:118808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.805 [2024-11-26 20:49:19.351095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:21:58.805 [2024-11-26 20:49:19.351114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:118816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.805 [2024-11-26 20:49:19.351127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:21:58.805 [2024-11-26 20:49:19.351146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:118824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.805 [2024-11-26 20:49:19.351169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:21:58.805 [2024-11-26 20:49:19.351187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:118832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.805 [2024-11-26 20:49:19.351201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:21:58.805 [2024-11-26 20:49:19.351220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:118840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.805 [2024-11-26 20:49:19.351234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:21:58.805 [2024-11-26 20:49:19.351269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:119232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.805 [2024-11-26 20:49:19.351284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:58.805 [2024-11-26 20:49:19.351311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:119240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.805 [2024-11-26 20:49:19.351337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 
cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:21:58.805 [2024-11-26 20:49:19.351356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:119248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.805 [2024-11-26 20:49:19.351369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:21:58.805 [2024-11-26 20:49:19.351388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:119256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.805 [2024-11-26 20:49:19.351402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:21:58.805 [2024-11-26 20:49:19.351421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:119264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.805 [2024-11-26 20:49:19.351434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:21:58.805 [2024-11-26 20:49:19.351453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:119272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.805 [2024-11-26 20:49:19.351466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:21:58.805 [2024-11-26 20:49:19.351485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:119280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.805 [2024-11-26 20:49:19.351499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:21:58.805 [2024-11-26 20:49:19.351517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:119288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.805 [2024-11-26 20:49:19.351530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:21:58.805 [2024-11-26 20:49:19.351548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:119296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.805 [2024-11-26 20:49:19.351562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:21:58.805 [2024-11-26 20:49:19.351580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:119304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.806 [2024-11-26 20:49:19.351593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:21:58.806 [2024-11-26 20:49:19.351612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:119312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.806 [2024-11-26 20:49:19.351625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:21:58.806 [2024-11-26 20:49:19.351644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:119320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.806 [2024-11-26 20:49:19.351657] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:21:58.806 [2024-11-26 20:49:19.351675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:119328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.806 [2024-11-26 20:49:19.351688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:21:58.806 [2024-11-26 20:49:19.351706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:119336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.806 [2024-11-26 20:49:19.351719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:21:58.806 [2024-11-26 20:49:19.351743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:119344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.806 [2024-11-26 20:49:19.351756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:21:58.806 [2024-11-26 20:49:19.351775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:119352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.806 [2024-11-26 20:49:19.351788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:21:58.806 [2024-11-26 20:49:19.351806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:118848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.806 [2024-11-26 20:49:19.351822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:21:58.806 [2024-11-26 20:49:19.351842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:118856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.806 [2024-11-26 20:49:19.351856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:21:58.806 [2024-11-26 20:49:19.351874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:118864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.806 [2024-11-26 20:49:19.351887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:21:58.806 [2024-11-26 20:49:19.351906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:118872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.806 [2024-11-26 20:49:19.351919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:58.806 [2024-11-26 20:49:19.351937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:118880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.806 [2024-11-26 20:49:19.351950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:58.806 [2024-11-26 20:49:19.351969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:118888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.806 [2024-11-26 
20:49:19.351982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:21:58.806 [2024-11-26 20:49:19.352001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:118896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.806 [2024-11-26 20:49:19.352014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:21:58.806 [2024-11-26 20:49:19.352032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:118904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.806 [2024-11-26 20:49:19.352046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:21:58.806 [2024-11-26 20:49:19.352064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:119360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.806 [2024-11-26 20:49:19.352077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:21:58.806 [2024-11-26 20:49:19.352095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:119368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.806 [2024-11-26 20:49:19.352108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:21:58.806 [2024-11-26 20:49:19.352131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:119376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.806 [2024-11-26 20:49:19.352144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:21:58.806 [2024-11-26 20:49:19.352171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:119384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.806 [2024-11-26 20:49:19.352185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:21:58.806 [2024-11-26 20:49:19.352204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:119392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.806 [2024-11-26 20:49:19.352217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:21:58.806 [2024-11-26 20:49:19.352235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:119400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.806 [2024-11-26 20:49:19.352248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:21:58.806 [2024-11-26 20:49:19.352267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:119408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.806 [2024-11-26 20:49:19.352280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:21:58.806 [2024-11-26 20:49:19.352299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:119416 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.806 [2024-11-26 20:49:19.352311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:21:58.806 [2024-11-26 20:49:19.352331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:119424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.806 [2024-11-26 20:49:19.352344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:21:58.806 [2024-11-26 20:49:19.352365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:119432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.806 [2024-11-26 20:49:19.352378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:21:58.806 [2024-11-26 20:49:19.352397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:119440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.806 [2024-11-26 20:49:19.352410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:21:58.806 [2024-11-26 20:49:19.352429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:119448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.806 [2024-11-26 20:49:19.352442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:21:58.806 [2024-11-26 20:49:19.352461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:119456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.806 [2024-11-26 20:49:19.352474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:21:58.806 [2024-11-26 20:49:19.352493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:119464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.806 [2024-11-26 20:49:19.352506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:21:58.806 [2024-11-26 20:49:19.352529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:119472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.806 [2024-11-26 20:49:19.352542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:21:58.806 [2024-11-26 20:49:19.352562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:119480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.806 [2024-11-26 20:49:19.352575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:21:58.807 [2024-11-26 20:49:19.352604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:119488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.807 [2024-11-26 20:49:19.352618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:21:58.807 [2024-11-26 20:49:19.352637] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:119496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.807 [2024-11-26 20:49:19.352650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:21:58.807 [2024-11-26 20:49:19.352669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:119504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.807 [2024-11-26 20:49:19.352682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:21:58.807 [2024-11-26 20:49:19.352701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:119512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.807 [2024-11-26 20:49:19.352714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:21:58.807 [2024-11-26 20:49:19.352733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:118912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.807 [2024-11-26 20:49:19.352746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:58.807 [2024-11-26 20:49:19.352765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:118920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.807 [2024-11-26 20:49:19.352778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:21:58.807 [2024-11-26 20:49:19.352797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:118928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.807 [2024-11-26 20:49:19.352810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:21:58.807 [2024-11-26 20:49:19.352828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:118936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.807 [2024-11-26 20:49:19.352841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:21:58.807 [2024-11-26 20:49:19.352860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:118944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.807 [2024-11-26 20:49:19.352875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:21:58.807 [2024-11-26 20:49:19.352894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:118952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.807 [2024-11-26 20:49:19.352908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:58.807 [2024-11-26 20:49:19.352927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:118960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.807 [2024-11-26 20:49:19.352945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:21:58.807 [2024-11-26 20:49:19.352964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:118968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.807 [2024-11-26 20:49:19.352977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:58.807 [2024-11-26 20:49:19.352995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:119520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.807 [2024-11-26 20:49:19.353008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:58.807 [2024-11-26 20:49:19.353027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:119528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.807 [2024-11-26 20:49:19.353040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:21:58.807 [2024-11-26 20:49:19.353058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:119536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.807 [2024-11-26 20:49:19.353072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:21:58.807 [2024-11-26 20:49:19.353090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:119544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.807 [2024-11-26 20:49:19.353103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:21:58.807 [2024-11-26 20:49:19.353122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:119552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.807 [2024-11-26 20:49:19.353135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:21:58.807 [2024-11-26 20:49:19.353162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:119560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.807 [2024-11-26 20:49:19.353176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:21:58.807 [2024-11-26 20:49:19.353195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:119568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.807 [2024-11-26 20:49:19.353208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:21:58.807 [2024-11-26 20:49:19.353227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:119576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.807 [2024-11-26 20:49:19.353240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:21:58.807 [2024-11-26 20:49:19.353259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:119584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.807 [2024-11-26 20:49:19.353273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:21:58.807 [2024-11-26 20:49:19.353291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:119592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.807 [2024-11-26 20:49:19.353305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:21:58.807 [2024-11-26 20:49:19.353324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:119600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.807 [2024-11-26 20:49:19.353342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:21:58.807 [2024-11-26 20:49:19.353362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:119608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.807 [2024-11-26 20:49:19.353375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:21:58.807 [2024-11-26 20:49:19.353394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:118976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.807 [2024-11-26 20:49:19.353407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:21:58.807 [2024-11-26 20:49:19.353426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:118984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.807 [2024-11-26 20:49:19.353439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:21:58.807 [2024-11-26 20:49:19.353458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:118992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.807 [2024-11-26 20:49:19.353471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:21:58.807 [2024-11-26 20:49:19.353490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:119000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.807 [2024-11-26 20:49:19.353503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:21:58.807 [2024-11-26 20:49:19.353521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:119008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.807 [2024-11-26 20:49:19.353534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:21:58.807 [2024-11-26 20:49:19.353553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:119016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.807 [2024-11-26 20:49:19.353566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:21:58.807 [2024-11-26 20:49:19.353585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:119024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.807 [2024-11-26 
20:49:19.353597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:21:58.807 [2024-11-26 20:49:19.353616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:119032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.807 [2024-11-26 20:49:19.353629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:21:58.807 [2024-11-26 20:49:19.353648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:119040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.807 [2024-11-26 20:49:19.353661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:21:58.807 [2024-11-26 20:49:19.353680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:119048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.807 [2024-11-26 20:49:19.353693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:21:58.807 [2024-11-26 20:49:19.353712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:119056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.807 [2024-11-26 20:49:19.353725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:21:58.807 [2024-11-26 20:49:19.353748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:119064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.807 [2024-11-26 20:49:19.353761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:21:58.808 [2024-11-26 20:49:19.353780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:119072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.808 [2024-11-26 20:49:19.353793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:21:58.808 [2024-11-26 20:49:19.353812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:119080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.808 [2024-11-26 20:49:19.353825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:21:58.808 [2024-11-26 20:49:19.353851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:119088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.808 [2024-11-26 20:49:19.353865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:21:58.808 [2024-11-26 20:49:19.354426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:119096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.808 [2024-11-26 20:49:19.354452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:21:58.808 [2024-11-26 20:49:19.354480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:119616 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.808 [2024-11-26 20:49:19.354495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:21:58.808 [2024-11-26 20:49:19.354520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:119624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.808 [2024-11-26 20:49:19.354534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:21:58.808 [2024-11-26 20:49:19.354558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:119632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.808 [2024-11-26 20:49:19.354572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:21:58.808 [2024-11-26 20:49:19.354597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:119640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.808 [2024-11-26 20:49:19.354611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:58.808 [2024-11-26 20:49:19.354635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:119648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.808 [2024-11-26 20:49:19.354648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:58.808 [2024-11-26 20:49:19.354673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:119656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.808 [2024-11-26 20:49:19.354686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:21:58.808 [2024-11-26 20:49:19.354711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:119664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.808 [2024-11-26 20:49:19.354724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:58.808 [2024-11-26 20:49:19.354767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:119672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.808 [2024-11-26 20:49:19.354782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:21:58.808 [2024-11-26 20:49:19.354806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:119680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.808 [2024-11-26 20:49:19.354820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:21:58.808 [2024-11-26 20:49:19.354844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:119688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.808 [2024-11-26 20:49:19.354858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:58.808 [2024-11-26 20:49:19.354882] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:119696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.808 [2024-11-26 20:49:19.354896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:58.808 [2024-11-26 20:49:19.354920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:119704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.808 [2024-11-26 20:49:19.354934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:21:58.808 [2024-11-26 20:49:19.354958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:119712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.808 [2024-11-26 20:49:19.354971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:21:58.808 [2024-11-26 20:49:19.354996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:119720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.808 [2024-11-26 20:49:19.355009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:21:58.808 [2024-11-26 20:49:19.355037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:119728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.808 [2024-11-26 20:49:19.355050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:58.808 [2024-11-26 20:49:19.355075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:119736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.808 [2024-11-26 20:49:19.355088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:58.808 8866.55 IOPS, 34.63 MiB/s [2024-11-26T20:49:53.801Z] 8481.04 IOPS, 33.13 MiB/s [2024-11-26T20:49:53.801Z] 8127.67 IOPS, 31.75 MiB/s [2024-11-26T20:49:53.801Z] 7802.56 IOPS, 30.48 MiB/s [2024-11-26T20:49:53.801Z] 7502.46 IOPS, 29.31 MiB/s [2024-11-26T20:49:53.801Z] 7224.59 IOPS, 28.22 MiB/s [2024-11-26T20:49:53.801Z] 6966.57 IOPS, 27.21 MiB/s [2024-11-26T20:49:53.801Z] 6771.55 IOPS, 26.45 MiB/s [2024-11-26T20:49:53.801Z] 6907.70 IOPS, 26.98 MiB/s [2024-11-26T20:49:53.801Z] 7039.97 IOPS, 27.50 MiB/s [2024-11-26T20:49:53.801Z] 7160.47 IOPS, 27.97 MiB/s [2024-11-26T20:49:53.801Z] 7271.24 IOPS, 28.40 MiB/s [2024-11-26T20:49:53.801Z] 7380.21 IOPS, 28.83 MiB/s [2024-11-26T20:49:53.801Z] 7482.03 IOPS, 29.23 MiB/s [2024-11-26T20:49:53.801Z] [2024-11-26 20:49:32.694302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:16760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.808 [2024-11-26 20:49:32.694387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:21:58.808 [2024-11-26 20:49:32.694451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:16768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.808 [2024-11-26 20:49:32.694474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 
sqhd:003d p:0 m:0 dnr:0 00:21:58.808 [2024-11-26 20:49:32.694541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:16776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.808 [2024-11-26 20:49:32.694561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:21:58.808 [2024-11-26 20:49:32.694585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.808 [2024-11-26 20:49:32.694603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:21:58.808 [2024-11-26 20:49:32.694628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.808 [2024-11-26 20:49:32.694647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:21:58.808 [2024-11-26 20:49:32.694671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:16800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.808 [2024-11-26 20:49:32.694689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:58.808 [2024-11-26 20:49:32.694714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:16808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.808 [2024-11-26 20:49:32.694732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:58.808 [2024-11-26 20:49:32.694756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:16816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.808 [2024-11-26 20:49:32.694774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:21:58.808 [2024-11-26 20:49:32.694798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.808 [2024-11-26 20:49:32.694817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:21:58.808 [2024-11-26 20:49:32.694841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:16832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.808 [2024-11-26 20:49:32.694859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:21:58.808 [2024-11-26 20:49:32.694883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:16840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.808 [2024-11-26 20:49:32.694902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:21:58.808 [2024-11-26 20:49:32.694926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:16848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.808 [2024-11-26 20:49:32.694944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:21:58.808 [2024-11-26 20:49:32.694968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:16856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.808 [2024-11-26 20:49:32.694987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:21:58.809 [2024-11-26 20:49:32.695011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:16864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.809 [2024-11-26 20:49:32.695028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:21:58.809 [2024-11-26 20:49:32.695062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:16872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.809 [2024-11-26 20:49:32.695081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:21:58.809 [2024-11-26 20:49:32.695106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:16880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.809 [2024-11-26 20:49:32.695123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:21:58.809 [2024-11-26 20:49:32.695147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:16376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.809 [2024-11-26 20:49:32.695180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:21:58.809 [2024-11-26 20:49:32.695211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:16384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.809 [2024-11-26 20:49:32.695230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:21:58.809 [2024-11-26 20:49:32.695258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.809 [2024-11-26 20:49:32.695286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:58.809 [2024-11-26 20:49:32.695321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:16400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.809 [2024-11-26 20:49:32.695340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:21:58.809 [2024-11-26 20:49:32.695364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.809 [2024-11-26 20:49:32.695383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:21:58.809 [2024-11-26 20:49:32.695407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.809 [2024-11-26 20:49:32.695425] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:21:58.809 [2024-11-26 20:49:32.695450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.809 [2024-11-26 20:49:32.695469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:21:58.809 [2024-11-26 20:49:32.695493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:16432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.809 [2024-11-26 20:49:32.695512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:21:58.809 [2024-11-26 20:49:32.695535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:16440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.809 [2024-11-26 20:49:32.695553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:21:58.809 [2024-11-26 20:49:32.695576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:16448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.809 [2024-11-26 20:49:32.695594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:21:58.809 [2024-11-26 20:49:32.695617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:16456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.809 [2024-11-26 20:49:32.695646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:21:58.809 [2024-11-26 20:49:32.695669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:16464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.809 [2024-11-26 20:49:32.695687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:21:58.809 [2024-11-26 20:49:32.695710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:16472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.809 [2024-11-26 20:49:32.695728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:21:58.809 [2024-11-26 20:49:32.695752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:16480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.809 [2024-11-26 20:49:32.695771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:21:58.809 [2024-11-26 20:49:32.695796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:16488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.809 [2024-11-26 20:49:32.695814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:21:58.809 [2024-11-26 20:49:32.695838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:16496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:21:58.809 [2024-11-26 20:49:32.695855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:21:58.809 [2024-11-26 20:49:32.695917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:16888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.809 [2024-11-26 20:49:32.695943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.809 [2024-11-26 20:49:32.695965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:16896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.809 [2024-11-26 20:49:32.695983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.809 [2024-11-26 20:49:32.696003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:16904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.809 [2024-11-26 20:49:32.696022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.809 [2024-11-26 20:49:32.696042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:16912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.809 [2024-11-26 20:49:32.696061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.809 [2024-11-26 20:49:32.696080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:16920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.809 [2024-11-26 20:49:32.696098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.809 [2024-11-26 20:49:32.696118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:16928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.809 [2024-11-26 20:49:32.696136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.809 [2024-11-26 20:49:32.696171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:16936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.809 [2024-11-26 20:49:32.696190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.809 [2024-11-26 20:49:32.696221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:16944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.809 [2024-11-26 20:49:32.696240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.809 [2024-11-26 20:49:32.696262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:16952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.809 [2024-11-26 20:49:32.696281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.809 [2024-11-26 20:49:32.696301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:16960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.809 [2024-11-26 
20:49:32.696319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.809 [2024-11-26 20:49:32.696339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:16968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.809 [2024-11-26 20:49:32.696357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.809 [2024-11-26 20:49:32.696378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:16976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.809 [2024-11-26 20:49:32.696396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.809 [2024-11-26 20:49:32.696416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:16984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.809 [2024-11-26 20:49:32.696434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.809 [2024-11-26 20:49:32.696454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:16992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.809 [2024-11-26 20:49:32.696472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.809 [2024-11-26 20:49:32.696492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:17000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.809 [2024-11-26 20:49:32.696510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.809 [2024-11-26 20:49:32.696530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:17008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.809 [2024-11-26 20:49:32.696548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.809 [2024-11-26 20:49:32.696568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:16504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.809 [2024-11-26 20:49:32.696587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.809 [2024-11-26 20:49:32.696606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:16512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.810 [2024-11-26 20:49:32.696624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.810 [2024-11-26 20:49:32.696644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:16520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.810 [2024-11-26 20:49:32.696662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.810 [2024-11-26 20:49:32.696682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:16528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.810 [2024-11-26 20:49:32.696709] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.810 [2024-11-26 20:49:32.696729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:16536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.810 [2024-11-26 20:49:32.696748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.810 [2024-11-26 20:49:32.696768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:16544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.810 [2024-11-26 20:49:32.696786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.810 [2024-11-26 20:49:32.696806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:16552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.810 [2024-11-26 20:49:32.696824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.810 [2024-11-26 20:49:32.696844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:16560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.810 [2024-11-26 20:49:32.696862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.810 [2024-11-26 20:49:32.696881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:17016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.810 [2024-11-26 20:49:32.696899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.810 [2024-11-26 20:49:32.696920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:17024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.810 [2024-11-26 20:49:32.696938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.810 [2024-11-26 20:49:32.696957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.810 [2024-11-26 20:49:32.696974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.810 [2024-11-26 20:49:32.696993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:17040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.810 [2024-11-26 20:49:32.697010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.810 [2024-11-26 20:49:32.697029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:17048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.810 [2024-11-26 20:49:32.697047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.810 [2024-11-26 20:49:32.697066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:17056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.810 [2024-11-26 20:49:32.697083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.810 [2024-11-26 20:49:32.697103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:17064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.810 [2024-11-26 20:49:32.697121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.810 [2024-11-26 20:49:32.697141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:17072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.810 [2024-11-26 20:49:32.697173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.810 [2024-11-26 20:49:32.697204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:17080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.810 [2024-11-26 20:49:32.697223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.810 [2024-11-26 20:49:32.697242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:17088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.810 [2024-11-26 20:49:32.697261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.810 [2024-11-26 20:49:32.697281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:17096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.810 [2024-11-26 20:49:32.697299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.810 [2024-11-26 20:49:32.697318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:17104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.810 [2024-11-26 20:49:32.697336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.810 [2024-11-26 20:49:32.697355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:17112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.810 [2024-11-26 20:49:32.697374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.810 [2024-11-26 20:49:32.697395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:17120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.810 [2024-11-26 20:49:32.697417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.810 [2024-11-26 20:49:32.697440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:17128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.810 [2024-11-26 20:49:32.697461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.810 [2024-11-26 20:49:32.697481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:17136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.810 [2024-11-26 20:49:32.697494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:21:58.810 [2024-11-26 20:49:32.697509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:16568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.810 [2024-11-26 20:49:32.697523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.810 [2024-11-26 20:49:32.697538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:16576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.810 [2024-11-26 20:49:32.697551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.810 [2024-11-26 20:49:32.697570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:16584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.810 [2024-11-26 20:49:32.697589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.810 [2024-11-26 20:49:32.697610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:16592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.810 [2024-11-26 20:49:32.697629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.810 [2024-11-26 20:49:32.697649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:16600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.810 [2024-11-26 20:49:32.697679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.810 [2024-11-26 20:49:32.697700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:16608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.810 [2024-11-26 20:49:32.697720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.810 [2024-11-26 20:49:32.697741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:16616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.810 [2024-11-26 20:49:32.697761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.810 [2024-11-26 20:49:32.697783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:16624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.810 [2024-11-26 20:49:32.697803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.810 [2024-11-26 20:49:32.697826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:17144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.810 [2024-11-26 20:49:32.697847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.810 [2024-11-26 20:49:32.697869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:17152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.810 [2024-11-26 20:49:32.697889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.810 
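
The status pairs printed in the completions above are (status code type/status code). (03/02) is Path Related Status / Asymmetric Access Inaccessible, i.e. the ANA state of that path has made the namespace unreachable, and (00/08) is Generic Command Status / Command Aborted due to SQ Deletion, which is what queued I/O receives when the qpair is torn down for the reset. A minimal shell sketch for tallying these pairs from a saved copy of this console output (the file name build.log is an assumption, not part of the test scripts):

  #!/usr/bin/env bash
  # Count each (SCT/SC) pair emitted by spdk_nvme_print_completion and label the
  # two pairs that appear in this run; anything else falls back to a generic hint.
  log="${1:-build.log}"
  grep -o '([0-9a-f][0-9a-f]/[0-9a-f][0-9a-f])' "$log" | sort | uniq -c | sort -rn |
  while read -r count pair; do
      case "$pair" in
          '(00/08)') desc='Generic status: Command Aborted due to SQ Deletion' ;;
          '(03/02)') desc='Path Related status: Asymmetric Access Inaccessible' ;;
          *)         desc='see the NVMe base spec status code tables' ;;
      esac
      printf '%8s  %s  %s\n' "$count" "$pair" "$desc"
  done

Run against the whole log, this separates the ANA failover completions from the abort-on-reset completions.
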
[2024-11-26 20:49:32.697911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:17160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.810 [2024-11-26 20:49:32.697931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.810 [2024-11-26 20:49:32.697952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:17168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.810 [2024-11-26 20:49:32.697973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.810 [2024-11-26 20:49:32.697993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:17176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.811 [2024-11-26 20:49:32.698013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.811 [2024-11-26 20:49:32.698034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:17184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.811 [2024-11-26 20:49:32.698053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.811 [2024-11-26 20:49:32.698074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:17192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.811 [2024-11-26 20:49:32.698092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.811 [2024-11-26 20:49:32.698111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:17200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.811 [2024-11-26 20:49:32.698129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.811 [2024-11-26 20:49:32.698149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:17208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.811 [2024-11-26 20:49:32.698184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.811 [2024-11-26 20:49:32.698211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:17216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.811 [2024-11-26 20:49:32.698245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.811 [2024-11-26 20:49:32.698267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:17224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.811 [2024-11-26 20:49:32.698300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.811 [2024-11-26 20:49:32.698321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:17232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.811 [2024-11-26 20:49:32.698341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.811 [2024-11-26 20:49:32.698363] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:17240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.811 [2024-11-26 20:49:32.698384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.811 [2024-11-26 20:49:32.698406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:17248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.811 [2024-11-26 20:49:32.698427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.811 [2024-11-26 20:49:32.698450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:17256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.811 [2024-11-26 20:49:32.698480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.811 [2024-11-26 20:49:32.698504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:17264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.811 [2024-11-26 20:49:32.698526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.811 [2024-11-26 20:49:32.698549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:16632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.811 [2024-11-26 20:49:32.698570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.811 [2024-11-26 20:49:32.698593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:16640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.811 [2024-11-26 20:49:32.698615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.811 [2024-11-26 20:49:32.698638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:16648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.811 [2024-11-26 20:49:32.698659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.811 [2024-11-26 20:49:32.698682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:16656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.811 [2024-11-26 20:49:32.698703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.811 [2024-11-26 20:49:32.698726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:16664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.811 [2024-11-26 20:49:32.698748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.811 [2024-11-26 20:49:32.698770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.811 [2024-11-26 20:49:32.698791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.811 [2024-11-26 20:49:32.698825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:14 nsid:1 lba:16680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.811 [2024-11-26 20:49:32.698846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.811 [2024-11-26 20:49:32.698880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:16688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.811 [2024-11-26 20:49:32.698900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.811 [2024-11-26 20:49:32.698921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:16696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.811 [2024-11-26 20:49:32.698941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.811 [2024-11-26 20:49:32.698965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:16704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.811 [2024-11-26 20:49:32.698985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.811 [2024-11-26 20:49:32.699005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:16712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.811 [2024-11-26 20:49:32.699024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.811 [2024-11-26 20:49:32.699044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:16720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.811 [2024-11-26 20:49:32.699061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.811 [2024-11-26 20:49:32.699080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:16728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.811 [2024-11-26 20:49:32.699097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.811 [2024-11-26 20:49:32.699115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:16736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.811 [2024-11-26 20:49:32.699132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.811 [2024-11-26 20:49:32.699150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:16744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.811 [2024-11-26 20:49:32.699170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.811 [2024-11-26 20:49:32.699204] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a4310 is same with the state(6) to be set 00:21:58.811 [2024-11-26 20:49:32.699226] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:58.811 [2024-11-26 20:49:32.699239] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:58.811 [2024-11-26 20:49:32.699253] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16752 len:8 PRP1 0x0 PRP2 0x0 00:21:58.811 [2024-11-26 20:49:32.699270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.811 [2024-11-26 20:49:32.699290] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:58.811 [2024-11-26 20:49:32.699313] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:58.811 [2024-11-26 20:49:32.699327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17272 len:8 PRP1 0x0 PRP2 0x0 00:21:58.811 [2024-11-26 20:49:32.699354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.811 [2024-11-26 20:49:32.699372] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:58.811 [2024-11-26 20:49:32.699385] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:58.812 [2024-11-26 20:49:32.699398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17280 len:8 PRP1 0x0 PRP2 0x0 00:21:58.812 [2024-11-26 20:49:32.699416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.812 [2024-11-26 20:49:32.699434] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:58.812 [2024-11-26 20:49:32.699447] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:58.812 [2024-11-26 20:49:32.699461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17288 len:8 PRP1 0x0 PRP2 0x0 00:21:58.812 [2024-11-26 20:49:32.699479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.812 [2024-11-26 20:49:32.699498] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:58.812 [2024-11-26 20:49:32.699510] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:58.812 [2024-11-26 20:49:32.699524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17296 len:8 PRP1 0x0 PRP2 0x0 00:21:58.812 [2024-11-26 20:49:32.699546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.812 [2024-11-26 20:49:32.699565] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:58.812 [2024-11-26 20:49:32.699578] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:58.812 [2024-11-26 20:49:32.699591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17304 len:8 PRP1 0x0 PRP2 0x0 00:21:58.812 [2024-11-26 20:49:32.699609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.812 [2024-11-26 20:49:32.699627] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:58.812 [2024-11-26 20:49:32.699641] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:58.812 [2024-11-26 20:49:32.699654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:17312 len:8 PRP1 0x0 PRP2 0x0 00:21:58.812 [2024-11-26 20:49:32.699672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.812 [2024-11-26 20:49:32.699690] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:58.812 [2024-11-26 20:49:32.699703] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:58.812 [2024-11-26 20:49:32.699719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17320 len:8 PRP1 0x0 PRP2 0x0 00:21:58.812 [2024-11-26 20:49:32.699737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.812 [2024-11-26 20:49:32.699754] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:58.812 [2024-11-26 20:49:32.699767] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:58.812 [2024-11-26 20:49:32.699780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17328 len:8 PRP1 0x0 PRP2 0x0 00:21:58.812 [2024-11-26 20:49:32.699798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.812 [2024-11-26 20:49:32.699816] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:58.812 [2024-11-26 20:49:32.699829] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:58.812 [2024-11-26 20:49:32.699849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17336 len:8 PRP1 0x0 PRP2 0x0 00:21:58.812 [2024-11-26 20:49:32.699867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.812 [2024-11-26 20:49:32.699886] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:58.812 [2024-11-26 20:49:32.699899] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:58.812 [2024-11-26 20:49:32.699913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17344 len:8 PRP1 0x0 PRP2 0x0 00:21:58.812 [2024-11-26 20:49:32.699930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.812 [2024-11-26 20:49:32.699949] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:58.812 [2024-11-26 20:49:32.699961] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:58.812 [2024-11-26 20:49:32.699975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17352 len:8 PRP1 0x0 PRP2 0x0 00:21:58.812 [2024-11-26 20:49:32.699993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.812 [2024-11-26 20:49:32.700012] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:58.812 [2024-11-26 20:49:32.700025] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:58.812 [2024-11-26 20:49:32.700038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17360 len:8 PRP1 0x0 PRP2 0x0 
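
Each "aborting queued i/o" error above is followed by a "Command completed manually" notice and a synthetic ABORTED - SQ DELETION completion: requests still queued on the qpair are finished in software so the controller reset can proceed. A quick consistency check over a saved log (again assuming the capture is in build.log) is simply to compare the two counters:

  #!/usr/bin/env bash
  # The abort path should emit one manual completion per "aborting queued i/o" line.
  log="${1:-build.log}"
  aborts=$(grep -o 'aborting queued i/o' "$log" | wc -l)
  manual=$(grep -o 'Command completed manually' "$log" | wc -l)
  printf 'aborting queued i/o : %s\n' "$aborts"
  printf 'manual completions  : %s\n' "$manual"
  if [ "$aborts" -eq "$manual" ]; then echo match; else echo mismatch; fi
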
00:21:58.812 [2024-11-26 20:49:32.700058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.812 [2024-11-26 20:49:32.700076] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:58.812 [2024-11-26 20:49:32.700089] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:58.812 [2024-11-26 20:49:32.700102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17368 len:8 PRP1 0x0 PRP2 0x0 00:21:58.812 [2024-11-26 20:49:32.700121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.812 [2024-11-26 20:49:32.700139] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:58.812 [2024-11-26 20:49:32.700151] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:58.812 [2024-11-26 20:49:32.700181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17376 len:8 PRP1 0x0 PRP2 0x0 00:21:58.812 [2024-11-26 20:49:32.700200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.812 [2024-11-26 20:49:32.700218] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:58.812 [2024-11-26 20:49:32.700231] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:58.812 [2024-11-26 20:49:32.700248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17384 len:8 PRP1 0x0 PRP2 0x0 00:21:58.812 [2024-11-26 20:49:32.700267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.812 [2024-11-26 20:49:32.700285] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:58.812 [2024-11-26 20:49:32.700298] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:58.812 [2024-11-26 20:49:32.700312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17392 len:8 PRP1 0x0 PRP2 0x0 00:21:58.812 [2024-11-26 20:49:32.700330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.812 [2024-11-26 20:49:32.701527] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:21:58.812 [2024-11-26 20:49:32.701640] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:0014000c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.812 [2024-11-26 20:49:32.701667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.812 [2024-11-26 20:49:32.701705] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20151e0 (9): Bad file descriptor 00:21:58.812 [2024-11-26 20:49:32.702123] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:21:58.812 [2024-11-26 20:49:32.702176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20151e0 with addr=10.0.0.3, port=4421 00:21:58.812 [2024-11-26 20:49:32.702198] nvme_tcp.c: 
326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20151e0 is same with the state(6) to be set
00:21:58.812 [2024-11-26 20:49:32.702278] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20151e0 (9): Bad file descriptor
00:21:58.812 [2024-11-26 20:49:32.702314] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:21:58.812 [2024-11-26 20:49:32.702333] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:21:58.812 [2024-11-26 20:49:32.702352] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:21:58.812 [2024-11-26 20:49:32.702369] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:21:58.812 [2024-11-26 20:49:32.702389] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:21:58.812 7571.78 IOPS, 29.58 MiB/s [2024-11-26T20:49:53.805Z]
7651.89 IOPS, 29.89 MiB/s [2024-11-26T20:49:53.805Z]
7736.84 IOPS, 30.22 MiB/s [2024-11-26T20:49:53.805Z]
7806.77 IOPS, 30.50 MiB/s [2024-11-26T20:49:53.805Z]
7852.20 IOPS, 30.67 MiB/s [2024-11-26T20:49:53.805Z]
7927.22 IOPS, 30.97 MiB/s [2024-11-26T20:49:53.805Z]
7998.10 IOPS, 31.24 MiB/s [2024-11-26T20:49:53.805Z]
8059.16 IOPS, 31.48 MiB/s [2024-11-26T20:49:53.805Z]
8118.36 IOPS, 31.71 MiB/s [2024-11-26T20:49:53.805Z]
8178.31 IOPS, 31.95 MiB/s [2024-11-26T20:49:53.805Z]
[2024-11-26 20:49:42.756938] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful.
00:21:58.812 8239.63 IOPS, 32.19 MiB/s [2024-11-26T20:49:53.805Z]
8299.64 IOPS, 32.42 MiB/s [2024-11-26T20:49:53.805Z]
8357.94 IOPS, 32.65 MiB/s [2024-11-26T20:49:53.805Z]
8413.29 IOPS, 32.86 MiB/s [2024-11-26T20:49:53.805Z]
8462.62 IOPS, 33.06 MiB/s [2024-11-26T20:49:53.805Z]
8511.90 IOPS, 33.25 MiB/s [2024-11-26T20:49:53.805Z]
8559.33 IOPS, 33.43 MiB/s [2024-11-26T20:49:53.805Z]
8605.53 IOPS, 33.62 MiB/s [2024-11-26T20:49:53.805Z]
8650.41 IOPS, 33.79 MiB/s [2024-11-26T20:49:53.805Z]
8693.44 IOPS, 33.96 MiB/s [2024-11-26T20:49:53.805Z]
Received shutdown signal, test time was about 55.403419 seconds
00:21:58.812
00:21:58.813 Latency(us)
00:21:58.813 [2024-11-26T20:49:53.806Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:58.813 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:21:58.813 Verification LBA range: start 0x0 length 0x4000
00:21:58.813 Nvme0n1 : 55.40 8708.00 34.02 0.00 0.00 14676.94 674.86 7030452.42
00:21:58.813 [2024-11-26T20:49:53.806Z] ===================================================================================================================
00:21:58.813 [2024-11-26T20:49:53.806Z] Total : 8708.00 34.02 0.00 0.00 14676.94 674.86 7030452.42
00:21:58.813 20:49:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:21:58.813 20:49:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT
00:21:58.813 20:49:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:21:58.813 20:49:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@125 -- # nvmftestfini
00:21:58.813 20:49:53
nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:58.813 20:49:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@121 -- # sync 00:21:58.813 20:49:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:58.813 20:49:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@124 -- # set +e 00:21:58.813 20:49:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:58.813 20:49:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:58.813 rmmod nvme_tcp 00:21:58.813 rmmod nvme_fabrics 00:21:58.813 rmmod nvme_keyring 00:21:58.813 20:49:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:58.813 20:49:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@128 -- # set -e 00:21:58.813 20:49:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@129 -- # return 0 00:21:58.813 20:49:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@517 -- # '[' -n 81333 ']' 00:21:58.813 20:49:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@518 -- # killprocess 81333 00:21:58.813 20:49:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # '[' -z 81333 ']' 00:21:58.813 20:49:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@958 -- # kill -0 81333 00:21:58.813 20:49:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # uname 00:21:58.813 20:49:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:58.813 20:49:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81333 00:21:58.813 20:49:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:58.813 20:49:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:58.813 killing process with pid 81333 00:21:58.813 20:49:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81333' 00:21:58.813 20:49:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@973 -- # kill 81333 00:21:58.813 20:49:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@978 -- # wait 81333 00:21:58.813 20:49:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:58.813 20:49:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:58.813 20:49:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:58.813 20:49:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@297 -- # iptr 00:21:58.813 20:49:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:58.813 20:49:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # iptables-save 00:21:58.813 20:49:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:21:58.813 20:49:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:58.813 20:49:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:21:58.813 20:49:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:21:58.813 20:49:53 
nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:21:58.813 20:49:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:21:58.813 20:49:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:21:58.813 20:49:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:21:58.813 20:49:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:21:58.813 20:49:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:21:58.813 20:49:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:21:58.813 20:49:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:21:59.072 20:49:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:21:59.072 20:49:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:21:59.072 20:49:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:59.072 20:49:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:59.072 20:49:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns 00:21:59.072 20:49:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:59.072 20:49:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:59.072 20:49:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:59.072 20:49:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@300 -- # return 0 00:21:59.072 00:21:59.072 real 1m0.891s 00:21:59.072 user 2m42.215s 00:21:59.072 sys 0m24.146s 00:21:59.072 20:49:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:59.072 20:49:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:21:59.072 ************************************ 00:21:59.072 END TEST nvmf_host_multipath 00:21:59.072 ************************************ 00:21:59.072 20:49:54 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@43 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:21:59.072 20:49:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:59.072 20:49:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:59.072 20:49:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:59.072 ************************************ 00:21:59.072 START TEST nvmf_timeout 00:21:59.072 ************************************ 00:21:59.072 20:49:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:21:59.332 * Looking for test storage... 
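
nvmftestfini, traced above, unloads the kernel NVMe-oF modules, kills the target process (pid 81333 in this run), strips the SPDK_NVMF iptables rules, and dismantles the veth/bridge/namespace topology. Collapsed into a plain script, the teardown amounts to roughly the following; the interface and namespace names are the ones in the trace, while the exact body of remove_spdk_ns is an assumption:

  #!/usr/bin/env bash
  # Condensed sketch of the nvmfcleanup + nvmf_veth_fini sequence traced above;
  # errors are ignored so a partially built topology can still be cleaned up.
  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics
  iptables-save | grep -v SPDK_NVMF | iptables-restore
  for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$dev" nomaster || true
      ip link set "$dev" down || true
  done
  ip link delete nvmf_br type bridge || true
  ip link delete nvmf_init_if || true
  ip link delete nvmf_init_if2 || true
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if || true
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 || true
  ip netns delete nvmf_tgt_ns_spdk || true   # assumed effect of remove_spdk_ns
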
00:21:59.332 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:21:59.332 20:49:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:59.332 20:49:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:59.332 20:49:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1693 -- # lcov --version 00:21:59.332 20:49:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:59.332 20:49:54 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:59.332 20:49:54 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:59.332 20:49:54 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:59.332 20:49:54 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@336 -- # IFS=.-: 00:21:59.332 20:49:54 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@336 -- # read -ra ver1 00:21:59.332 20:49:54 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@337 -- # IFS=.-: 00:21:59.332 20:49:54 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@337 -- # read -ra ver2 00:21:59.332 20:49:54 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@338 -- # local 'op=<' 00:21:59.332 20:49:54 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@340 -- # ver1_l=2 00:21:59.332 20:49:54 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@341 -- # ver2_l=1 00:21:59.333 20:49:54 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:59.333 20:49:54 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@344 -- # case "$op" in 00:21:59.333 20:49:54 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@345 -- # : 1 00:21:59.333 20:49:54 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:59.333 20:49:54 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:59.333 20:49:54 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@365 -- # decimal 1 00:21:59.333 20:49:54 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@353 -- # local d=1 00:21:59.333 20:49:54 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:59.333 20:49:54 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@355 -- # echo 1 00:21:59.333 20:49:54 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@365 -- # ver1[v]=1 00:21:59.333 20:49:54 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@366 -- # decimal 2 00:21:59.333 20:49:54 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@353 -- # local d=2 00:21:59.333 20:49:54 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:59.333 20:49:54 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@355 -- # echo 2 00:21:59.333 20:49:54 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@366 -- # ver2[v]=2 00:21:59.333 20:49:54 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:59.333 20:49:54 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:59.333 20:49:54 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@368 -- # return 0 00:21:59.333 20:49:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:59.333 20:49:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:59.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:59.333 --rc genhtml_branch_coverage=1 00:21:59.333 --rc genhtml_function_coverage=1 00:21:59.333 --rc genhtml_legend=1 00:21:59.333 --rc geninfo_all_blocks=1 00:21:59.333 --rc geninfo_unexecuted_blocks=1 00:21:59.333 00:21:59.333 ' 00:21:59.333 20:49:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:59.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:59.333 --rc genhtml_branch_coverage=1 00:21:59.333 --rc genhtml_function_coverage=1 00:21:59.333 --rc genhtml_legend=1 00:21:59.333 --rc geninfo_all_blocks=1 00:21:59.333 --rc geninfo_unexecuted_blocks=1 00:21:59.333 00:21:59.333 ' 00:21:59.333 20:49:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:59.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:59.333 --rc genhtml_branch_coverage=1 00:21:59.333 --rc genhtml_function_coverage=1 00:21:59.333 --rc genhtml_legend=1 00:21:59.333 --rc geninfo_all_blocks=1 00:21:59.333 --rc geninfo_unexecuted_blocks=1 00:21:59.333 00:21:59.333 ' 00:21:59.333 20:49:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:59.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:59.333 --rc genhtml_branch_coverage=1 00:21:59.333 --rc genhtml_function_coverage=1 00:21:59.333 --rc genhtml_legend=1 00:21:59.333 --rc geninfo_all_blocks=1 00:21:59.333 --rc geninfo_unexecuted_blocks=1 00:21:59.333 00:21:59.333 ' 00:21:59.333 20:49:54 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:59.333 20:49:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # uname -s 00:21:59.333 20:49:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:59.333 20:49:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:59.333 
20:49:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:59.333 20:49:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:59.333 20:49:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:59.333 20:49:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:59.333 20:49:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:59.333 20:49:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:59.333 20:49:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:59.333 20:49:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:59.333 20:49:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b 00:21:59.333 20:49:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=5b7a0101-ee75-44bd-b64f-b6a56d193f2b 00:21:59.333 20:49:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:59.333 20:49:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:59.333 20:49:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:59.333 20:49:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:59.333 20:49:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:59.333 20:49:54 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@15 -- # shopt -s extglob 00:21:59.333 20:49:54 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:59.333 20:49:54 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:59.333 20:49:54 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:59.333 20:49:54 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:59.333 20:49:54 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:59.333 20:49:54 nvmf_tcp.nvmf_host.nvmf_timeout -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:59.333 20:49:54 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@5 -- # export PATH 00:21:59.333 20:49:54 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:59.333 20:49:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@51 -- # : 0 00:21:59.333 20:49:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:59.333 20:49:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:59.333 20:49:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:59.333 20:49:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:59.333 20:49:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:59.333 20:49:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:59.333 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:59.333 20:49:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:59.333 20:49:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:59.333 20:49:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:59.333 20:49:54 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:59.333 20:49:54 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:59.333 20:49:54 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:59.333 20:49:54 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:21:59.333 20:49:54 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:59.333 20:49:54 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@19 -- # nvmftestinit 00:21:59.333 20:49:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:59.333 20:49:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:59.333 20:49:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:59.333 20:49:54 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:59.333 20:49:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:59.333 20:49:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:59.333 20:49:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:59.333 20:49:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:59.333 20:49:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:21:59.333 20:49:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:21:59.333 20:49:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:21:59.333 20:49:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:21:59.333 20:49:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:21:59.333 20:49:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@460 -- # nvmf_veth_init 00:21:59.333 20:49:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:59.334 20:49:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:21:59.334 20:49:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:21:59.334 20:49:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:21:59.334 20:49:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:59.334 20:49:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:21:59.334 20:49:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:59.334 20:49:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:21:59.334 20:49:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:59.334 20:49:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:21:59.334 20:49:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:59.334 20:49:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:59.334 20:49:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:59.334 20:49:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:59.334 20:49:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:59.334 20:49:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:59.334 20:49:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:21:59.334 Cannot find device "nvmf_init_br" 00:21:59.334 20:49:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # true 00:21:59.334 20:49:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:21:59.334 Cannot find device "nvmf_init_br2" 00:21:59.334 20:49:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # true 00:21:59.334 20:49:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@164 
-- # ip link set nvmf_tgt_br nomaster 00:21:59.334 Cannot find device "nvmf_tgt_br" 00:21:59.334 20:49:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@164 -- # true 00:21:59.334 20:49:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:21:59.593 Cannot find device "nvmf_tgt_br2" 00:21:59.593 20:49:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@165 -- # true 00:21:59.593 20:49:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:21:59.593 Cannot find device "nvmf_init_br" 00:21:59.593 20:49:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # true 00:21:59.593 20:49:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:21:59.593 Cannot find device "nvmf_init_br2" 00:21:59.593 20:49:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@167 -- # true 00:21:59.593 20:49:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:21:59.593 Cannot find device "nvmf_tgt_br" 00:21:59.593 20:49:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@168 -- # true 00:21:59.593 20:49:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:21:59.593 Cannot find device "nvmf_tgt_br2" 00:21:59.593 20:49:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # true 00:21:59.593 20:49:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:21:59.593 Cannot find device "nvmf_br" 00:21:59.593 20:49:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # true 00:21:59.593 20:49:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:21:59.593 Cannot find device "nvmf_init_if" 00:21:59.593 20:49:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # true 00:21:59.593 20:49:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:21:59.593 Cannot find device "nvmf_init_if2" 00:21:59.593 20:49:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@172 -- # true 00:21:59.593 20:49:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:59.593 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:59.593 20:49:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@173 -- # true 00:21:59.593 20:49:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:59.593 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:59.593 20:49:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # true 00:21:59.593 20:49:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:21:59.593 20:49:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:59.593 20:49:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:21:59.593 20:49:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:59.593 20:49:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:59.593 20:49:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 
00:21:59.593 20:49:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:59.593 20:49:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:59.593 20:49:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:21:59.594 20:49:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:21:59.594 20:49:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:21:59.594 20:49:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:21:59.594 20:49:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:21:59.594 20:49:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:21:59.594 20:49:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:21:59.594 20:49:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:21:59.594 20:49:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:21:59.594 20:49:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:59.594 20:49:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:59.594 20:49:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:59.853 20:49:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:21:59.853 20:49:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:21:59.853 20:49:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:21:59.853 20:49:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:21:59.853 20:49:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:59.853 20:49:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:59.853 20:49:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:59.853 20:49:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:21:59.853 20:49:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:21:59.853 20:49:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:21:59.853 20:49:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:59.853 20:49:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 
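[editor's note] To make the nvmf_veth_init portion of the xtrace above easier to follow, here is a condensed shell sketch of the topology it builds. It only restates the ip/iptables commands already visible in the trace (namespace nvmf_tgt_ns_spdk, veth pairs nvmf_init_if*/nvmf_tgt_if*, bridge nvmf_br, addresses 10.0.0.1-4/24); the SPDK_NVMF comment tags on the iptables rules are omitted for brevity, and exact behavior should be taken from nvmf/common.sh rather than from this summary. The pings that follow in the log verify the resulting initiator-to-target connectivity.

  # sketch of nvmf_veth_init as traced above (not the authoritative script)
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if  type veth peer name nvmf_init_br
  ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
  ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  # initiator addresses live in the default namespace, target addresses in the netns
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip addr add 10.0.0.2/24 dev nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
  ip link set nvmf_init_if up;  ip link set nvmf_init_if2 up
  ip link set nvmf_init_br up;  ip link set nvmf_init_br2 up
  ip link set nvmf_tgt_br up;   ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  # all four bridge-side veth ends are enslaved to one bridge
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br  master nvmf_br
  ip link set nvmf_init_br2 master nvmf_br
  ip link set nvmf_tgt_br   master nvmf_br
  ip link set nvmf_tgt_br2  master nvmf_br
  # allow NVMe/TCP (port 4420) in from the initiator veths and bridge-local forwarding
  iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
  iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT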
00:21:59.853 20:49:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:21:59.853 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:59.853 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.100 ms 00:21:59.853 00:21:59.853 --- 10.0.0.3 ping statistics --- 00:21:59.853 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:59.853 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:21:59.853 20:49:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:21:59.853 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:21:59.853 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.048 ms 00:21:59.853 00:21:59.853 --- 10.0.0.4 ping statistics --- 00:21:59.853 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:59.853 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:21:59.853 20:49:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:59.853 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:59.853 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.020 ms 00:21:59.853 00:21:59.853 --- 10.0.0.1 ping statistics --- 00:21:59.853 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:59.853 rtt min/avg/max/mdev = 0.020/0.020/0.020/0.000 ms 00:21:59.853 20:49:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:21:59.853 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:59.853 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.093 ms 00:21:59.853 00:21:59.853 --- 10.0.0.2 ping statistics --- 00:21:59.853 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:59.853 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:21:59.853 20:49:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:59.853 20:49:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@461 -- # return 0 00:21:59.853 20:49:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:59.853 20:49:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:59.853 20:49:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:59.853 20:49:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:59.853 20:49:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:59.853 20:49:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:59.853 20:49:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:59.853 20:49:54 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:21:59.853 20:49:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:59.853 20:49:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:59.853 20:49:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:59.853 20:49:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@509 -- # nvmfpid=82547 00:21:59.853 20:49:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:21:59.853 20:49:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@510 -- # waitforlisten 82547 00:21:59.853 20:49:54 
nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 82547 ']' 00:21:59.853 20:49:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:59.853 20:49:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:59.853 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:59.853 20:49:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:59.853 20:49:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:59.853 20:49:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:59.853 [2024-11-26 20:49:54.769702] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:21:59.853 [2024-11-26 20:49:54.770302] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:00.111 [2024-11-26 20:49:54.913087] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:22:00.111 [2024-11-26 20:49:54.973604] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:00.111 [2024-11-26 20:49:54.973661] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:00.111 [2024-11-26 20:49:54.973672] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:00.111 [2024-11-26 20:49:54.973681] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:00.111 [2024-11-26 20:49:54.973688] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:00.111 [2024-11-26 20:49:54.974675] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:00.111 [2024-11-26 20:49:54.975093] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:00.111 [2024-11-26 20:49:55.021641] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:00.111 20:49:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:00.111 20:49:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:22:00.111 20:49:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:00.111 20:49:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:00.111 20:49:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:00.370 20:49:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:00.370 20:49:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:00.370 20:49:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:00.628 [2024-11-26 20:49:55.422713] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:00.628 20:49:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:22:00.887 Malloc0 00:22:00.887 20:49:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:01.146 20:49:56 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:01.405 20:49:56 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:22:01.664 [2024-11-26 20:49:56.566216] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:22:01.664 20:49:56 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@32 -- # bdevperf_pid=82593 00:22:01.664 20:49:56 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:22:01.664 20:49:56 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@34 -- # waitforlisten 82593 /var/tmp/bdevperf.sock 00:22:01.664 20:49:56 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 82593 ']' 00:22:01.664 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:01.664 20:49:56 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:01.664 20:49:56 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:01.664 20:49:56 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:22:01.664 20:49:56 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:01.664 20:49:56 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:01.664 [2024-11-26 20:49:56.621443] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:22:01.664 [2024-11-26 20:49:56.621522] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82593 ] 00:22:01.923 [2024-11-26 20:49:56.766630] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:01.923 [2024-11-26 20:49:56.823087] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:01.923 [2024-11-26 20:49:56.874417] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:02.182 20:49:56 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:02.182 20:49:56 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:22:02.182 20:49:56 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:22:02.440 20:49:57 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:22:02.699 NVMe0n1 00:22:02.699 20:49:57 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@51 -- # rpc_pid=82605 00:22:02.699 20:49:57 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:02.699 20:49:57 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@53 -- # sleep 1 00:22:02.699 Running I/O for 10 seconds... 
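[editor's note] Before the 10-second I/O run starts, it may help to recap the RPC sequence the trace has issued so far and the controller attach that host/timeout.sh is about to exercise. Every command below is copied from the xtrace above; the only assumption is the shorthand rpc.py, bdevperf and bdevperf.py for the full /home/vagrant/spdk_repo/spdk/... paths shown in the log, so treat this as a readable summary rather than the authoritative script.

  # target side: nvmf_tgt was started inside nvmf_tgt_ns_spdk with -m 0x3, then configured over /var/tmp/spdk.sock
  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512 -b Malloc0
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

  # host side: bdevperf with its own RPC socket, then the attach whose timeout behavior is under test
  bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f &
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
      -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
  bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

The step that follows in the log, nvmf_subsystem_remove_listener for 10.0.0.3:4420 while perform_tests is still running, is what yields the ABORTED - SQ DELETION completions below and drives the --ctrlr-loss-timeout-sec / --reconnect-delay-sec path that this nvmf_timeout test verifies.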
00:22:03.635 20:49:58 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:22:03.896 10404.00 IOPS, 40.64 MiB/s [2024-11-26T20:49:58.889Z] [2024-11-26 20:49:58.719060] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165eb10 is same with the state(6) to be set 00:22:03.896 [2024-11-26 20:49:58.719669] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165eb10 is same with the state(6) to be set 00:22:03.896 [2024-11-26 20:49:58.719758] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165eb10 is same with the state(6) to be set 00:22:03.896 [2024-11-26 20:49:58.719804] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165eb10 is same with the state(6) to be set 00:22:03.896 [2024-11-26 20:49:58.719844] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165eb10 is same with the state(6) to be set 00:22:03.896 [2024-11-26 20:49:58.719959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:97680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.896 [2024-11-26 20:49:58.720333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.896 [2024-11-26 20:49:58.720458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:97688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.896 [2024-11-26 20:49:58.720515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.896 [2024-11-26 20:49:58.720566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:97696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.896 [2024-11-26 20:49:58.720624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.896 [2024-11-26 20:49:58.720673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:97704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.896 [2024-11-26 20:49:58.720732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.896 [2024-11-26 20:49:58.720780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:97712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.896 [2024-11-26 20:49:58.720840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.896 [2024-11-26 20:49:58.720889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:97720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.896 [2024-11-26 20:49:58.720943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.896 [2024-11-26 20:49:58.720985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:98048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.896 [2024-11-26 20:49:58.721042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.896 
[2024-11-26 20:49:58.721090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:98056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.896 [2024-11-26 20:49:58.721149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.896 [2024-11-26 20:49:58.721218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:98064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.896 [2024-11-26 20:49:58.721277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.896 [2024-11-26 20:49:58.721326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:98072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.896 [2024-11-26 20:49:58.721382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.896 [2024-11-26 20:49:58.721445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:98080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.896 [2024-11-26 20:49:58.721493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.896 [2024-11-26 20:49:58.721535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:98088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.896 [2024-11-26 20:49:58.721608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.896 [2024-11-26 20:49:58.721653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:98096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.896 [2024-11-26 20:49:58.721709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.896 [2024-11-26 20:49:58.721753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:98104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.896 [2024-11-26 20:49:58.721798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.896 [2024-11-26 20:49:58.721842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:97728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.896 [2024-11-26 20:49:58.721890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.896 [2024-11-26 20:49:58.721939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:97736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.897 [2024-11-26 20:49:58.721992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.897 [2024-11-26 20:49:58.722038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:97744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.897 [2024-11-26 20:49:58.722083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.897 [2024-11-26 20:49:58.722127] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:97752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.897 [2024-11-26 20:49:58.722139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.897 [2024-11-26 20:49:58.722150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:97760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.897 [2024-11-26 20:49:58.722159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.897 [2024-11-26 20:49:58.722170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:97768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.897 [2024-11-26 20:49:58.722179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.897 [2024-11-26 20:49:58.722190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:97776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.897 [2024-11-26 20:49:58.722207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.897 [2024-11-26 20:49:58.722234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:97784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.897 [2024-11-26 20:49:58.722244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.897 [2024-11-26 20:49:58.722255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:98112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.897 [2024-11-26 20:49:58.722265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.897 [2024-11-26 20:49:58.722277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:98120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.897 [2024-11-26 20:49:58.722286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.897 [2024-11-26 20:49:58.722297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:98128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.897 [2024-11-26 20:49:58.722306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.897 [2024-11-26 20:49:58.722317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:98136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.897 [2024-11-26 20:49:58.722327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.897 [2024-11-26 20:49:58.722338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.897 [2024-11-26 20:49:58.722347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.897 [2024-11-26 20:49:58.722359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:13 nsid:1 lba:98152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.897 [2024-11-26 20:49:58.722368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.897 [2024-11-26 20:49:58.722380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:98160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.897 [2024-11-26 20:49:58.722391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.897 [2024-11-26 20:49:58.722403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:98168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.897 [2024-11-26 20:49:58.722412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.897 [2024-11-26 20:49:58.722423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:98176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.897 [2024-11-26 20:49:58.722432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.897 [2024-11-26 20:49:58.722446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:98184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.897 [2024-11-26 20:49:58.722456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.897 [2024-11-26 20:49:58.722467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:98192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.897 [2024-11-26 20:49:58.722476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.897 [2024-11-26 20:49:58.722487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:98200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.897 [2024-11-26 20:49:58.722497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.897 [2024-11-26 20:49:58.722508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:98208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.897 [2024-11-26 20:49:58.722517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.897 [2024-11-26 20:49:58.722529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:98216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.897 [2024-11-26 20:49:58.722538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.897 [2024-11-26 20:49:58.722550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:98224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.897 [2024-11-26 20:49:58.722559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.897 [2024-11-26 20:49:58.722570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:98232 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:22:03.897 [2024-11-26 20:49:58.722580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.897 [2024-11-26 20:49:58.722591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:98240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.897 [2024-11-26 20:49:58.722600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.897 [2024-11-26 20:49:58.722612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:98248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.897 [2024-11-26 20:49:58.722622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.897 [2024-11-26 20:49:58.722633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:98256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.897 [2024-11-26 20:49:58.722642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.897 [2024-11-26 20:49:58.722653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:98264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.897 [2024-11-26 20:49:58.722663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.897 [2024-11-26 20:49:58.722674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:98272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.897 [2024-11-26 20:49:58.722683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.897 [2024-11-26 20:49:58.722694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:98280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.897 [2024-11-26 20:49:58.722704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.897 [2024-11-26 20:49:58.722715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:98288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.897 [2024-11-26 20:49:58.722724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.897 [2024-11-26 20:49:58.722735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:98296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.897 [2024-11-26 20:49:58.722745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.897 [2024-11-26 20:49:58.722756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:97792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.897 [2024-11-26 20:49:58.722766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.897 [2024-11-26 20:49:58.722777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:97800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.897 [2024-11-26 
20:49:58.722786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.897 [2024-11-26 20:49:58.722797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:97808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.897 [2024-11-26 20:49:58.722807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.897 [2024-11-26 20:49:58.722818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:97816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.897 [2024-11-26 20:49:58.722838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.897 [2024-11-26 20:49:58.722849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:97824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.897 [2024-11-26 20:49:58.722859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.897 [2024-11-26 20:49:58.722870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:97832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.897 [2024-11-26 20:49:58.722880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.897 [2024-11-26 20:49:58.722891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:97840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.897 [2024-11-26 20:49:58.722900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.897 [2024-11-26 20:49:58.722912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:97848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.897 [2024-11-26 20:49:58.722921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.897 [2024-11-26 20:49:58.722932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:98304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.898 [2024-11-26 20:49:58.722942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.898 [2024-11-26 20:49:58.722954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:98312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.898 [2024-11-26 20:49:58.722964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.898 [2024-11-26 20:49:58.722975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:98320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.898 [2024-11-26 20:49:58.722985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.898 [2024-11-26 20:49:58.722996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:98328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.898 [2024-11-26 20:49:58.723006] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.898 [2024-11-26 20:49:58.723017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:98336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.898 [2024-11-26 20:49:58.723027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.898 [2024-11-26 20:49:58.723038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:98344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.898 [2024-11-26 20:49:58.723048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.898 [2024-11-26 20:49:58.723059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:98352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.898 [2024-11-26 20:49:58.723068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.898 [2024-11-26 20:49:58.723079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:98360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.898 [2024-11-26 20:49:58.723088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.898 [2024-11-26 20:49:58.723099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:98368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.898 [2024-11-26 20:49:58.723108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.898 [2024-11-26 20:49:58.723119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:98376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.898 [2024-11-26 20:49:58.723129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.898 [2024-11-26 20:49:58.723139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:98384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.898 [2024-11-26 20:49:58.723149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.898 [2024-11-26 20:49:58.723160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:98392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.898 [2024-11-26 20:49:58.723178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.898 [2024-11-26 20:49:58.723190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:98400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.898 [2024-11-26 20:49:58.723200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.898 [2024-11-26 20:49:58.723211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:98408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.898 [2024-11-26 20:49:58.723221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.898 [2024-11-26 20:49:58.723232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:98416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.898 [2024-11-26 20:49:58.723241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.898 [2024-11-26 20:49:58.723253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:98424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.898 [2024-11-26 20:49:58.723263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.898 [2024-11-26 20:49:58.723275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:97856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.898 [2024-11-26 20:49:58.723284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.898 [2024-11-26 20:49:58.723296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:97864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.898 [2024-11-26 20:49:58.723315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.898 [2024-11-26 20:49:58.723326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:97872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.898 [2024-11-26 20:49:58.723336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.898 [2024-11-26 20:49:58.723347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:97880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.898 [2024-11-26 20:49:58.723357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.898 [2024-11-26 20:49:58.723368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:97888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.898 [2024-11-26 20:49:58.723378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.898 [2024-11-26 20:49:58.723389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:97896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.898 [2024-11-26 20:49:58.723399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.898 [2024-11-26 20:49:58.723411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:97904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.898 [2024-11-26 20:49:58.723420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.898 [2024-11-26 20:49:58.723431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:97912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.898 [2024-11-26 20:49:58.723441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:22:03.898 [2024-11-26 20:49:58.723452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:98432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.898 [2024-11-26 20:49:58.723462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.898 [2024-11-26 20:49:58.723473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:98440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.898 [2024-11-26 20:49:58.723482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.898 [2024-11-26 20:49:58.723493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:98448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.898 [2024-11-26 20:49:58.723503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.898 [2024-11-26 20:49:58.723514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:98456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.898 [2024-11-26 20:49:58.723525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.898 [2024-11-26 20:49:58.723536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:98464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.898 [2024-11-26 20:49:58.723546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.898 [2024-11-26 20:49:58.723557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:98472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.898 [2024-11-26 20:49:58.723566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.898 [2024-11-26 20:49:58.723577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:98480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.898 [2024-11-26 20:49:58.723587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.898 [2024-11-26 20:49:58.723598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:98488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.898 [2024-11-26 20:49:58.723608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.898 [2024-11-26 20:49:58.723619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:98496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.898 [2024-11-26 20:49:58.723628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.898 [2024-11-26 20:49:58.723640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:98504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.898 [2024-11-26 20:49:58.723650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.898 [2024-11-26 
20:49:58.723665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:98512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.898 [2024-11-26 20:49:58.723679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.898 [2024-11-26 20:49:58.723690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:98520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.898 [2024-11-26 20:49:58.723699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.898 [2024-11-26 20:49:58.723710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:98528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.898 [2024-11-26 20:49:58.723719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.898 [2024-11-26 20:49:58.723730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:98536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.898 [2024-11-26 20:49:58.723739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.898 [2024-11-26 20:49:58.723751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:98544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.898 [2024-11-26 20:49:58.723760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.898 [2024-11-26 20:49:58.723773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:98552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.899 [2024-11-26 20:49:58.723783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.899 [2024-11-26 20:49:58.723794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:98560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.899 [2024-11-26 20:49:58.723804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.899 [2024-11-26 20:49:58.723815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:98568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.899 [2024-11-26 20:49:58.723825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.899 [2024-11-26 20:49:58.723835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:98576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.899 [2024-11-26 20:49:58.723845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.899 [2024-11-26 20:49:58.723856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:98584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.899 [2024-11-26 20:49:58.723868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.899 [2024-11-26 20:49:58.723879] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:97920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.899 [2024-11-26 20:49:58.723889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.899 [2024-11-26 20:49:58.723903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:97928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.899 [2024-11-26 20:49:58.723915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.899 [2024-11-26 20:49:58.723933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:97936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.899 [2024-11-26 20:49:58.723949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.899 [2024-11-26 20:49:58.723966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:97944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.899 [2024-11-26 20:49:58.723979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.899 [2024-11-26 20:49:58.723997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:97952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.899 [2024-11-26 20:49:58.724012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.899 [2024-11-26 20:49:58.724034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:97960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.899 [2024-11-26 20:49:58.724050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.899 [2024-11-26 20:49:58.724065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:97968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.899 [2024-11-26 20:49:58.724074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.899 [2024-11-26 20:49:58.724092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:97976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.899 [2024-11-26 20:49:58.724106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.899 [2024-11-26 20:49:58.724125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:97984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.899 [2024-11-26 20:49:58.724138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.899 [2024-11-26 20:49:58.729653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:97992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.899 [2024-11-26 20:49:58.729737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.899 [2024-11-26 20:49:58.729783] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:12 nsid:1 lba:98000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.899 [2024-11-26 20:49:58.729832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.899 [2024-11-26 20:49:58.729878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:98008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.899 [2024-11-26 20:49:58.729921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.899 [2024-11-26 20:49:58.729960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:98016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.899 [2024-11-26 20:49:58.730006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.899 [2024-11-26 20:49:58.730044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:98024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.899 [2024-11-26 20:49:58.730087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.899 [2024-11-26 20:49:58.730135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:98032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.899 [2024-11-26 20:49:58.730205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.899 [2024-11-26 20:49:58.730253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:98040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.899 [2024-11-26 20:49:58.730307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.899 [2024-11-26 20:49:58.730364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:98592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.899 [2024-11-26 20:49:58.730415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.899 [2024-11-26 20:49:58.730456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:98600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.899 [2024-11-26 20:49:58.730497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.899 [2024-11-26 20:49:58.730549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:98608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.899 [2024-11-26 20:49:58.730601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.899 [2024-11-26 20:49:58.730643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:98616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.899 [2024-11-26 20:49:58.730694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.899 [2024-11-26 20:49:58.730735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:98624 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.899 [2024-11-26 20:49:58.730776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.899 [2024-11-26 20:49:58.730827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:98632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.899 [2024-11-26 20:49:58.730869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.899 [2024-11-26 20:49:58.730917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:98640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.899 [2024-11-26 20:49:58.730966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.899 [2024-11-26 20:49:58.731013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:98648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.899 [2024-11-26 20:49:58.731054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.899 [2024-11-26 20:49:58.731094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:98656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.899 [2024-11-26 20:49:58.731149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.899 [2024-11-26 20:49:58.731209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:98664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.899 [2024-11-26 20:49:58.731276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.899 [2024-11-26 20:49:58.731328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:98672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.899 [2024-11-26 20:49:58.731379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.899 [2024-11-26 20:49:58.731421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:98680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.899 [2024-11-26 20:49:58.731472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.899 [2024-11-26 20:49:58.731514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:98688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.899 [2024-11-26 20:49:58.731565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.899 [2024-11-26 20:49:58.731613] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x171f970 is same with the state(6) to be set 00:22:03.899 [2024-11-26 20:49:58.731665] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:03.899 [2024-11-26 20:49:58.731730] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:03.899 [2024-11-26 20:49:58.731772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:98696 len:8 PRP1 0x0 PRP2 0x0 00:22:03.899 [2024-11-26 20:49:58.731818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.899 [2024-11-26 20:49:58.732004] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:03.899 [2024-11-26 20:49:58.732116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.899 [2024-11-26 20:49:58.732180] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:03.899 [2024-11-26 20:49:58.732247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.899 [2024-11-26 20:49:58.732296] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:03.899 [2024-11-26 20:49:58.732338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.899 [2024-11-26 20:49:58.732378] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:03.900 [2024-11-26 20:49:58.732426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.900 [2024-11-26 20:49:58.732471] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfe50 is same with the state(6) to be set 00:22:03.900 [2024-11-26 20:49:58.732756] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:22:03.900 [2024-11-26 20:49:58.732848] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfe50 (9): Bad file descriptor 00:22:03.900 [2024-11-26 20:49:58.732990] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.900 [2024-11-26 20:49:58.733070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfe50 with addr=10.0.0.3, port=4420 00:22:03.900 [2024-11-26 20:49:58.733122] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfe50 is same with the state(6) to be set 00:22:03.900 [2024-11-26 20:49:58.733183] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfe50 (9): Bad file descriptor 00:22:03.900 [2024-11-26 20:49:58.733233] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:22:03.900 [2024-11-26 20:49:58.733285] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:22:03.900 [2024-11-26 20:49:58.733335] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:22:03.900 [2024-11-26 20:49:58.733404] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 
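For context on the loop above: errno 111 from connect() is ECONNREFUSED, which normally means nothing is accepting TCP connections on 10.0.0.3:4420 at that moment, so each reconnect attempt fails immediately (the qpair flush then reports "Bad file descriptor") and bdev_nvme schedules the next reset after its reconnect delay. A quick manual way to see whether a listener is present, separate from timeout.sh itself (address, port and NQN copied from the trace; nc and the nvmf_subsystem_get_listeners RPC are assumed to be available on the test host):
  # probe the target port directly
  nc -z -w1 10.0.0.3 4420 && echo "listener up" || echo "connection refused"
  # or ask the SPDK target which listeners the subsystem currently has
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1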
00:22:03.900 [2024-11-26 20:49:58.733465] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:22:03.900 20:49:58 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@56 -- # sleep 2 00:22:05.787 6105.00 IOPS, 23.85 MiB/s [2024-11-26T20:50:00.780Z] 4070.00 IOPS, 15.90 MiB/s [2024-11-26T20:50:00.780Z] [2024-11-26 20:50:00.733656] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:22:05.787 [2024-11-26 20:50:00.733720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfe50 with addr=10.0.0.3, port=4420 00:22:05.787 [2024-11-26 20:50:00.733734] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfe50 is same with the state(6) to be set 00:22:05.787 [2024-11-26 20:50:00.733758] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfe50 (9): Bad file descriptor 00:22:05.787 [2024-11-26 20:50:00.733775] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:22:05.787 [2024-11-26 20:50:00.733785] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:22:05.787 [2024-11-26 20:50:00.733798] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:22:05.787 [2024-11-26 20:50:00.733809] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:22:05.787 [2024-11-26 20:50:00.733822] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:22:05.787 20:50:00 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # get_controller 00:22:05.787 20:50:00 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:05.787 20:50:00 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:22:06.045 20:50:01 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]] 00:22:06.045 20:50:01 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # get_bdev 00:22:06.045 20:50:01 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:22:06.045 20:50:01 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:22:06.612 20:50:01 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]] 00:22:06.612 20:50:01 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@61 -- # sleep 5 00:22:07.988 3052.50 IOPS, 11.92 MiB/s [2024-11-26T20:50:02.981Z] 2442.00 IOPS, 9.54 MiB/s [2024-11-26T20:50:02.982Z] [2024-11-26 20:50:02.733973] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.989 [2024-11-26 20:50:02.734030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfe50 with addr=10.0.0.3, port=4420 00:22:07.989 [2024-11-26 20:50:02.734044] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfe50 is same with the state(6) to be set 00:22:07.989 [2024-11-26 20:50:02.734066] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfe50 (9): Bad file descriptor 00:22:07.989 [2024-11-26 20:50:02.734084] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: 
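The get_controller and get_bdev helpers traced above boil down to the RPC calls shown: ask the bdevperf application, over its private socket, which NVMe controllers and bdevs it still knows about, and pull the names out with jq. At this point in the test both objects are still expected to be present, since the ctrlr-loss timeout has not yet expired (the later @62/@63 checks expect empty output once the controller has been dropped). The same check can be repeated by hand with the commands copied from the trace:
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | jq -r '.[].name'   # NVMe0
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs | jq -r '.[].name'              # NVMe0n1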
*ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:22:07.989 [2024-11-26 20:50:02.734094] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:22:07.989 [2024-11-26 20:50:02.734105] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:22:07.989 [2024-11-26 20:50:02.734116] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:22:07.989 [2024-11-26 20:50:02.734129] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:22:09.864 2035.00 IOPS, 7.95 MiB/s [2024-11-26T20:50:04.857Z] 1744.29 IOPS, 6.81 MiB/s [2024-11-26T20:50:04.857Z] [2024-11-26 20:50:04.734181] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:22:09.864 [2024-11-26 20:50:04.734230] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:22:09.864 [2024-11-26 20:50:04.734241] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:22:09.864 [2024-11-26 20:50:04.734252] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] already in failed state 00:22:09.864 [2024-11-26 20:50:04.734264] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:22:10.798 1526.25 IOPS, 5.96 MiB/s 00:22:10.798 Latency(us) 00:22:10.798 [2024-11-26T20:50:05.791Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:10.798 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:10.798 Verification LBA range: start 0x0 length 0x4000 00:22:10.798 NVMe0n1 : 8.17 1494.00 5.84 15.66 0.00 84722.56 2808.69 7030452.42 00:22:10.798 [2024-11-26T20:50:05.791Z] =================================================================================================================== 00:22:10.798 [2024-11-26T20:50:05.791Z] Total : 1494.00 5.84 15.66 0.00 84722.56 2808.69 7030452.42 00:22:10.798 { 00:22:10.798 "results": [ 00:22:10.798 { 00:22:10.798 "job": "NVMe0n1", 00:22:10.798 "core_mask": "0x4", 00:22:10.798 "workload": "verify", 00:22:10.798 "status": "finished", 00:22:10.798 "verify_range": { 00:22:10.798 "start": 0, 00:22:10.798 "length": 16384 00:22:10.798 }, 00:22:10.798 "queue_depth": 128, 00:22:10.798 "io_size": 4096, 00:22:10.798 "runtime": 8.172712, 00:22:10.798 "iops": 1493.996117812545, 00:22:10.798 "mibps": 5.835922335205254, 00:22:10.798 "io_failed": 128, 00:22:10.798 "io_timeout": 0, 00:22:10.798 "avg_latency_us": 84722.56079537474, 00:22:10.798 "min_latency_us": 2808.6857142857143, 00:22:10.798 "max_latency_us": 7030452.419047619 00:22:10.798 } 00:22:10.798 ], 00:22:10.798 "core_count": 1 00:22:10.798 } 00:22:11.364 20:50:06 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # get_controller 00:22:11.364 20:50:06 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:22:11.364 20:50:06 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:11.622 20:50:06 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # [[ '' == '' ]] 00:22:11.622 20:50:06 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # get_bdev 00:22:11.622 20:50:06 
nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:22:11.622 20:50:06 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:22:11.881 20:50:06 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # [[ '' == '' ]] 00:22:11.881 20:50:06 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@65 -- # wait 82605 00:22:11.881 20:50:06 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@67 -- # killprocess 82593 00:22:11.881 20:50:06 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 82593 ']' 00:22:11.881 20:50:06 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 82593 00:22:11.881 20:50:06 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:22:11.881 20:50:06 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:11.881 20:50:06 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82593 00:22:11.881 20:50:06 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:22:11.881 20:50:06 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:22:11.881 20:50:06 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82593' 00:22:11.881 killing process with pid 82593 00:22:11.881 20:50:06 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 82593 00:22:11.881 Received shutdown signal, test time was about 9.282158 seconds 00:22:11.881 00:22:11.881 Latency(us) 00:22:11.881 [2024-11-26T20:50:06.874Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:11.881 [2024-11-26T20:50:06.874Z] =================================================================================================================== 00:22:11.881 [2024-11-26T20:50:06.874Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:11.881 20:50:06 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 82593 00:22:12.138 20:50:07 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:22:12.396 [2024-11-26 20:50:07.208387] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:22:12.396 20:50:07 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:22:12.396 20:50:07 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@74 -- # bdevperf_pid=82728 00:22:12.396 20:50:07 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@76 -- # waitforlisten 82728 /var/tmp/bdevperf.sock 00:22:12.396 20:50:07 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 82728 ']' 00:22:12.396 20:50:07 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:12.396 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:22:12.396 20:50:07 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:12.396 20:50:07 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:12.396 20:50:07 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:12.396 20:50:07 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:12.396 [2024-11-26 20:50:07.262544] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:22:12.397 [2024-11-26 20:50:07.262614] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82728 ] 00:22:12.655 [2024-11-26 20:50:07.404888] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:12.655 [2024-11-26 20:50:07.456902] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:12.655 [2024-11-26 20:50:07.498852] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:13.221 20:50:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:13.221 20:50:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:22:13.221 20:50:08 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:22:13.479 20:50:08 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1 00:22:13.738 NVMe0n1 00:22:13.738 20:50:08 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@84 -- # rpc_pid=82751 00:22:13.738 20:50:08 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:13.738 20:50:08 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@86 -- # sleep 1 00:22:13.996 Running I/O for 10 seconds... 
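The bdevperf instance just brought up is configured with the three reconnect knobs that drive the remainder of the test: --reconnect-delay-sec 1, --fast-io-fail-timeout-sec 2 and --ctrlr-loss-timeout-sec 5. Reading the flag names (exact semantics are in the SPDK bdev_nvme documentation), reconnects are retried about once per second, queued I/O starts failing back to bdevperf after roughly 2 seconds, and the controller is dropped after roughly 5 seconds without a successful reconnect, which is what the listener removal below is meant to exercise. The setup is just the two RPCs from the trace, repeatable against any bdevperf RPC socket:
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1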
00:22:14.964 20:50:09 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:22:14.964 11045.00 IOPS, 43.14 MiB/s [2024-11-26T20:50:09.957Z] [2024-11-26 20:50:09.808248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:95176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.964 [2024-11-26 20:50:09.808305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.964 [2024-11-26 20:50:09.808326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:95184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.964 [2024-11-26 20:50:09.808336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.964 [2024-11-26 20:50:09.808348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:95192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.964 [2024-11-26 20:50:09.808358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.964 [2024-11-26 20:50:09.808369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:95200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.964 [2024-11-26 20:50:09.808377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.964 [2024-11-26 20:50:09.808388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:95208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.965 [2024-11-26 20:50:09.808397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.965 [2024-11-26 20:50:09.808407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:95216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.965 [2024-11-26 20:50:09.808416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.965 [2024-11-26 20:50:09.808427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:95224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.965 [2024-11-26 20:50:09.808435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.965 [2024-11-26 20:50:09.808446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:95232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.965 [2024-11-26 20:50:09.808455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.965 [2024-11-26 20:50:09.808466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:94728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.965 [2024-11-26 20:50:09.808475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.965 [2024-11-26 20:50:09.808486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:94736 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.965 [2024-11-26 20:50:09.808494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.965 [2024-11-26 20:50:09.808505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:94744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.965 [2024-11-26 20:50:09.808514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.965 [2024-11-26 20:50:09.808525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:94752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.965 [2024-11-26 20:50:09.808534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.965 [2024-11-26 20:50:09.808544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:94760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.965 [2024-11-26 20:50:09.808553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.965 [2024-11-26 20:50:09.808563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:94768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.965 [2024-11-26 20:50:09.808572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.965 [2024-11-26 20:50:09.808582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:94776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.965 [2024-11-26 20:50:09.808591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.965 [2024-11-26 20:50:09.808602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:94784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.965 [2024-11-26 20:50:09.808610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.965 [2024-11-26 20:50:09.808621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:95240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.965 [2024-11-26 20:50:09.808629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.965 [2024-11-26 20:50:09.808641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:95248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.965 [2024-11-26 20:50:09.808650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.965 [2024-11-26 20:50:09.808661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:95256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.965 [2024-11-26 20:50:09.808670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.965 [2024-11-26 20:50:09.808681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:95264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:22:14.965 [2024-11-26 20:50:09.808689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.965 [2024-11-26 20:50:09.808699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:95272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.965 [2024-11-26 20:50:09.808708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.965 [2024-11-26 20:50:09.808718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:95280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.965 [2024-11-26 20:50:09.808727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.965 [2024-11-26 20:50:09.808737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:95288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.965 [2024-11-26 20:50:09.808746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.965 [2024-11-26 20:50:09.808756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:95296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.965 [2024-11-26 20:50:09.808764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.965 [2024-11-26 20:50:09.808786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:95304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.965 [2024-11-26 20:50:09.808795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.965 [2024-11-26 20:50:09.808806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:95312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.965 [2024-11-26 20:50:09.808815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.965 [2024-11-26 20:50:09.808825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:95320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.965 [2024-11-26 20:50:09.808839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.965 [2024-11-26 20:50:09.808849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:95328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.965 [2024-11-26 20:50:09.808858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.965 [2024-11-26 20:50:09.808869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:95336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.965 [2024-11-26 20:50:09.808877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.965 [2024-11-26 20:50:09.808887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:95344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.965 [2024-11-26 20:50:09.808896] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.965 [2024-11-26 20:50:09.808906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:95352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.965 [2024-11-26 20:50:09.808915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.965 [2024-11-26 20:50:09.808925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:95360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.965 [2024-11-26 20:50:09.808934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.965 [2024-11-26 20:50:09.808944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:94792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.965 [2024-11-26 20:50:09.808953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.965 [2024-11-26 20:50:09.808966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:94800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.965 [2024-11-26 20:50:09.808975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.965 [2024-11-26 20:50:09.808986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:94808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.965 [2024-11-26 20:50:09.808994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.965 [2024-11-26 20:50:09.809005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:94816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.965 [2024-11-26 20:50:09.809014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.965 [2024-11-26 20:50:09.809024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:94824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.965 [2024-11-26 20:50:09.809033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.965 [2024-11-26 20:50:09.809043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:94832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.965 [2024-11-26 20:50:09.809052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.965 [2024-11-26 20:50:09.809062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:94840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.965 [2024-11-26 20:50:09.809070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.965 [2024-11-26 20:50:09.809081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:94848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.965 [2024-11-26 20:50:09.809089] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.965 [2024-11-26 20:50:09.809100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:95368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.965 [2024-11-26 20:50:09.809108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.965 [2024-11-26 20:50:09.809118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:95376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.965 [2024-11-26 20:50:09.809127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.965 [2024-11-26 20:50:09.809137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:95384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.965 [2024-11-26 20:50:09.809145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.965 [2024-11-26 20:50:09.809165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:95392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.965 [2024-11-26 20:50:09.809175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.966 [2024-11-26 20:50:09.809186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:95400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.966 [2024-11-26 20:50:09.809195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.966 [2024-11-26 20:50:09.809206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:95408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.966 [2024-11-26 20:50:09.809214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.966 [2024-11-26 20:50:09.809225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:95416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.966 [2024-11-26 20:50:09.809233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.966 [2024-11-26 20:50:09.809248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:95424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.966 [2024-11-26 20:50:09.809257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.966 [2024-11-26 20:50:09.809267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:95432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.966 [2024-11-26 20:50:09.809276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.966 [2024-11-26 20:50:09.809288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:95440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.966 [2024-11-26 20:50:09.809297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.966 [2024-11-26 20:50:09.809307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:95448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.966 [2024-11-26 20:50:09.809316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.966 [2024-11-26 20:50:09.809326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:95456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.966 [2024-11-26 20:50:09.809335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.966 [2024-11-26 20:50:09.809345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:95464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.966 [2024-11-26 20:50:09.809354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.966 [2024-11-26 20:50:09.809364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:95472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.966 [2024-11-26 20:50:09.809372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.966 [2024-11-26 20:50:09.809383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:95480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.966 [2024-11-26 20:50:09.809391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.966 [2024-11-26 20:50:09.809401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:95488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.966 [2024-11-26 20:50:09.809410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.966 [2024-11-26 20:50:09.809420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:95496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.966 [2024-11-26 20:50:09.809428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.966 [2024-11-26 20:50:09.809439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:95504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.966 [2024-11-26 20:50:09.809447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.966 [2024-11-26 20:50:09.809457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:95512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.966 [2024-11-26 20:50:09.809466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.966 [2024-11-26 20:50:09.809477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:95520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.966 [2024-11-26 20:50:09.809485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:22:14.966 [2024-11-26 20:50:09.809495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:94856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.966 [2024-11-26 20:50:09.809504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.966 [2024-11-26 20:50:09.809514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:94864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.966 [2024-11-26 20:50:09.809523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.966 [2024-11-26 20:50:09.809533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:94872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.966 [2024-11-26 20:50:09.809541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.966 [2024-11-26 20:50:09.809552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:94880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.966 [2024-11-26 20:50:09.809561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.966 [2024-11-26 20:50:09.809571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:94888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.966 [2024-11-26 20:50:09.809579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.966 [2024-11-26 20:50:09.809591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:94896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.966 [2024-11-26 20:50:09.809600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.966 [2024-11-26 20:50:09.809610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:94904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.966 [2024-11-26 20:50:09.809619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.966 [2024-11-26 20:50:09.809629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:94912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.966 [2024-11-26 20:50:09.809638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.966 [2024-11-26 20:50:09.809648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:94920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.966 [2024-11-26 20:50:09.809657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.966 [2024-11-26 20:50:09.809667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:94928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.966 [2024-11-26 20:50:09.809675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.966 [2024-11-26 
20:50:09.809686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:94936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.966 [2024-11-26 20:50:09.809694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.966 [2024-11-26 20:50:09.809705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:94944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.966 [2024-11-26 20:50:09.809713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.966 [2024-11-26 20:50:09.809723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:94952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.966 [2024-11-26 20:50:09.809732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.966 [2024-11-26 20:50:09.809742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:94960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.966 [2024-11-26 20:50:09.809751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.966 [2024-11-26 20:50:09.809761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:94968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.966 [2024-11-26 20:50:09.809770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.966 [2024-11-26 20:50:09.809780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:94976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.966 [2024-11-26 20:50:09.809789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.966 [2024-11-26 20:50:09.809799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:95528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.966 [2024-11-26 20:50:09.809807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.966 [2024-11-26 20:50:09.809817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:95536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.966 [2024-11-26 20:50:09.809826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.966 [2024-11-26 20:50:09.809836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:95544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.966 [2024-11-26 20:50:09.809845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.966 [2024-11-26 20:50:09.809856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:95552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.966 [2024-11-26 20:50:09.809865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.966 [2024-11-26 20:50:09.809875] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:95560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.966 [2024-11-26 20:50:09.809883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.966 [2024-11-26 20:50:09.809894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:95568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.966 [2024-11-26 20:50:09.809903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.966 [2024-11-26 20:50:09.809913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:95576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.966 [2024-11-26 20:50:09.809922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.966 [2024-11-26 20:50:09.809932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:95584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.966 [2024-11-26 20:50:09.809940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.967 [2024-11-26 20:50:09.809950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:95592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.967 [2024-11-26 20:50:09.809959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.967 [2024-11-26 20:50:09.809970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:95600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.967 [2024-11-26 20:50:09.809978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.967 [2024-11-26 20:50:09.809989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:95608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.967 [2024-11-26 20:50:09.809998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.967 [2024-11-26 20:50:09.810008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:95616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.967 [2024-11-26 20:50:09.810017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.967 [2024-11-26 20:50:09.810027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:95624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.967 [2024-11-26 20:50:09.810036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.967 [2024-11-26 20:50:09.810046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:95632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.967 [2024-11-26 20:50:09.810054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.967 [2024-11-26 20:50:09.810064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:10 nsid:1 lba:94984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.967 [2024-11-26 20:50:09.810073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.967 [2024-11-26 20:50:09.810084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:94992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.967 [2024-11-26 20:50:09.810093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.967 [2024-11-26 20:50:09.810103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:95000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.967 [2024-11-26 20:50:09.810112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.967 [2024-11-26 20:50:09.810122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:95008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.967 [2024-11-26 20:50:09.810131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.967 [2024-11-26 20:50:09.810141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:95016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.967 [2024-11-26 20:50:09.810149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.967 [2024-11-26 20:50:09.810170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:95024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.967 [2024-11-26 20:50:09.810179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.967 [2024-11-26 20:50:09.810189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:95032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.967 [2024-11-26 20:50:09.810198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.967 [2024-11-26 20:50:09.810218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:95040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.967 [2024-11-26 20:50:09.810227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.967 [2024-11-26 20:50:09.810237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:95640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.967 [2024-11-26 20:50:09.810246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.967 [2024-11-26 20:50:09.810257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:95648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.967 [2024-11-26 20:50:09.810266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.967 [2024-11-26 20:50:09.810276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:95656 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:22:14.967 [2024-11-26 20:50:09.810285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.967 [2024-11-26 20:50:09.810295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:95664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.967 [2024-11-26 20:50:09.810303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.967 [2024-11-26 20:50:09.810314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:95672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.967 [2024-11-26 20:50:09.810328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.967 [2024-11-26 20:50:09.810338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.967 [2024-11-26 20:50:09.810346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.967 [2024-11-26 20:50:09.810356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:95688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.967 [2024-11-26 20:50:09.810365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.967 [2024-11-26 20:50:09.810376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:95696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.967 [2024-11-26 20:50:09.810385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.967 [2024-11-26 20:50:09.810395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:95704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.967 [2024-11-26 20:50:09.810404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.967 [2024-11-26 20:50:09.810414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:95712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.967 [2024-11-26 20:50:09.810422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.967 [2024-11-26 20:50:09.810432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:95720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.967 [2024-11-26 20:50:09.810446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.967 [2024-11-26 20:50:09.810457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:95728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.967 [2024-11-26 20:50:09.810466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.967 [2024-11-26 20:50:09.810476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:95736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.967 [2024-11-26 
20:50:09.810484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.967 [2024-11-26 20:50:09.810494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:95744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.967 [2024-11-26 20:50:09.810503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.967 [2024-11-26 20:50:09.810514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:95048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.967 [2024-11-26 20:50:09.810522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.967 [2024-11-26 20:50:09.810534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:95056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.967 [2024-11-26 20:50:09.810543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.967 [2024-11-26 20:50:09.810553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:95064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.967 [2024-11-26 20:50:09.810562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.967 [2024-11-26 20:50:09.810572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:95072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.967 [2024-11-26 20:50:09.810581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.967 [2024-11-26 20:50:09.810591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:95080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.967 [2024-11-26 20:50:09.810600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.967 [2024-11-26 20:50:09.810611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:95088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.967 [2024-11-26 20:50:09.810619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.967 [2024-11-26 20:50:09.810629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:95096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.967 [2024-11-26 20:50:09.810640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.967 [2024-11-26 20:50:09.810650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:95104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.967 [2024-11-26 20:50:09.810659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.967 [2024-11-26 20:50:09.810669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:95112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.967 [2024-11-26 20:50:09.810677] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.967 [2024-11-26 20:50:09.810687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:95120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.967 [2024-11-26 20:50:09.810696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.967 [2024-11-26 20:50:09.810706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:95128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.967 [2024-11-26 20:50:09.810715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.967 [2024-11-26 20:50:09.810725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:95136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.967 [2024-11-26 20:50:09.810734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.968 [2024-11-26 20:50:09.810744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:95144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.968 [2024-11-26 20:50:09.810753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.968 [2024-11-26 20:50:09.810764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:95152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.968 [2024-11-26 20:50:09.810773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.968 [2024-11-26 20:50:09.810783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:95160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.968 [2024-11-26 20:50:09.810792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.968 [2024-11-26 20:50:09.810801] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941970 is same with the state(6) to be set 00:22:14.968 [2024-11-26 20:50:09.810813] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:14.968 [2024-11-26 20:50:09.810820] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:14.968 [2024-11-26 20:50:09.810828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:95168 len:8 PRP1 0x0 PRP2 0x0 00:22:14.968 [2024-11-26 20:50:09.810838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.968 [2024-11-26 20:50:09.811094] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:22:14.968 [2024-11-26 20:50:09.811170] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e1e50 (9): Bad file descriptor 00:22:14.968 [2024-11-26 20:50:09.811257] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:22:14.968 [2024-11-26 20:50:09.811273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e1e50 with addr=10.0.0.3, 
port=4420 00:22:14.968 [2024-11-26 20:50:09.811282] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e1e50 is same with the state(6) to be set 00:22:14.968 [2024-11-26 20:50:09.811296] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e1e50 (9): Bad file descriptor 00:22:14.968 [2024-11-26 20:50:09.811318] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:22:14.968 [2024-11-26 20:50:09.811327] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:22:14.968 [2024-11-26 20:50:09.811338] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:22:14.968 [2024-11-26 20:50:09.811348] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:22:14.968 [2024-11-26 20:50:09.811361] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:22:14.968 20:50:09 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@90 -- # sleep 1 00:22:15.904 5920.50 IOPS, 23.13 MiB/s [2024-11-26T20:50:10.897Z] [2024-11-26 20:50:10.811496] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:22:15.904 [2024-11-26 20:50:10.811546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e1e50 with addr=10.0.0.3, port=4420 00:22:15.904 [2024-11-26 20:50:10.811560] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e1e50 is same with the state(6) to be set 00:22:15.904 [2024-11-26 20:50:10.811584] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e1e50 (9): Bad file descriptor 00:22:15.904 [2024-11-26 20:50:10.811602] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:22:15.904 [2024-11-26 20:50:10.811612] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:22:15.904 [2024-11-26 20:50:10.811624] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:22:15.904 [2024-11-26 20:50:10.811635] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:22:15.904 [2024-11-26 20:50:10.811646] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:22:15.904 20:50:10 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:22:16.162 [2024-11-26 20:50:11.076119] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:22:16.162 20:50:11 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@92 -- # wait 82751 00:22:17.096 3947.00 IOPS, 15.42 MiB/s [2024-11-26T20:50:12.089Z] [2024-11-26 20:50:11.829272] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 
00:22:18.965 2960.25 IOPS, 11.56 MiB/s [2024-11-26T20:50:14.894Z] 4573.20 IOPS, 17.86 MiB/s [2024-11-26T20:50:15.831Z] 5777.67 IOPS, 22.57 MiB/s [2024-11-26T20:50:16.767Z] 6645.14 IOPS, 25.96 MiB/s [2024-11-26T20:50:18.141Z] 7298.25 IOPS, 28.51 MiB/s [2024-11-26T20:50:19.119Z] 7799.56 IOPS, 30.47 MiB/s [2024-11-26T20:50:19.119Z] 8198.60 IOPS, 32.03 MiB/s
00:22:24.126 Latency(us)
00:22:24.126 [2024-11-26T20:50:19.119Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:24.126 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:22:24.126 Verification LBA range: start 0x0 length 0x4000
00:22:24.126 NVMe0n1 : 10.01 8203.25 32.04 0.00 0.00 15579.80 2449.80 3019898.88
00:22:24.126 [2024-11-26T20:50:19.119Z] ===================================================================================================================
00:22:24.126 [2024-11-26T20:50:19.119Z] Total : 8203.25 32.04 0.00 0.00 15579.80 2449.80 3019898.88
00:22:24.126 {
00:22:24.126 "results": [
00:22:24.126 {
00:22:24.126 "job": "NVMe0n1",
00:22:24.126 "core_mask": "0x4",
00:22:24.126 "workload": "verify",
00:22:24.126 "status": "finished",
00:22:24.126 "verify_range": {
00:22:24.126 "start": 0,
00:22:24.126 "length": 16384
00:22:24.126 },
00:22:24.126 "queue_depth": 128,
00:22:24.126 "io_size": 4096,
00:22:24.126 "runtime": 10.009939,
00:22:24.126 "iops": 8203.246793012426,
00:22:24.126 "mibps": 32.04393278520479,
00:22:24.126 "io_failed": 0,
00:22:24.126 "io_timeout": 0,
00:22:24.126 "avg_latency_us": 15579.798101083627,
00:22:24.126 "min_latency_us": 2449.7980952380954,
00:22:24.126 "max_latency_us": 3019898.88
00:22:24.126 }
00:22:24.126 ],
00:22:24.126 "core_count": 1
00:22:24.126 }
00:22:24.126 20:50:18 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@97 -- # rpc_pid=82856
00:22:24.126 20:50:18 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@98 -- # sleep 1
00:22:24.126 20:50:18 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:22:24.126 Running I/O for 10 seconds...
00:22:25.060 20:50:19 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:22:25.060 9857.00 IOPS, 38.50 MiB/s [2024-11-26T20:50:20.053Z] [2024-11-26 20:50:20.026699] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165f230 is same with the state(6) to be set 00:22:25.060 [2024-11-26 20:50:20.026768] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165f230 is same with the state(6) to be set 00:22:25.060 [2024-11-26 20:50:20.026779] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165f230 is same with the state(6) to be set 00:22:25.060 [2024-11-26 20:50:20.027029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:90408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.060 [2024-11-26 20:50:20.027061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.060 [2024-11-26 20:50:20.027093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:90416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.060 [2024-11-26 20:50:20.027105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.060 [2024-11-26 20:50:20.027123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:90424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.060 [2024-11-26 20:50:20.027134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.060 [2024-11-26 20:50:20.027146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:90432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.060 [2024-11-26 20:50:20.027168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.061 [2024-11-26 20:50:20.027180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:90760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.061 [2024-11-26 20:50:20.027189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.061 [2024-11-26 20:50:20.027200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:90768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.061 [2024-11-26 20:50:20.027209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.061 [2024-11-26 20:50:20.027220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:90776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.061 [2024-11-26 20:50:20.027229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.061 [2024-11-26 20:50:20.027239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:90784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.061 [2024-11-26 20:50:20.027248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:22:25.061 [2024-11-26 20:50:20.027258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:90792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.061 [2024-11-26 20:50:20.027267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.061 [2024-11-26 20:50:20.027278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:90800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.061 [2024-11-26 20:50:20.027286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.061 [2024-11-26 20:50:20.027296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:90808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.061 [2024-11-26 20:50:20.027312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.061 [2024-11-26 20:50:20.027323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:90816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.061 [2024-11-26 20:50:20.027349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.061 [2024-11-26 20:50:20.027362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:90824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.061 [2024-11-26 20:50:20.027371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.061 [2024-11-26 20:50:20.027382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:90832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.061 [2024-11-26 20:50:20.027392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.061 [2024-11-26 20:50:20.027403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:90840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.061 [2024-11-26 20:50:20.027413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.061 [2024-11-26 20:50:20.027424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:90848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.061 [2024-11-26 20:50:20.027433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.061 [2024-11-26 20:50:20.027445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:90856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.061 [2024-11-26 20:50:20.027459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.061 [2024-11-26 20:50:20.027471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:90864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.061 [2024-11-26 20:50:20.027480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.061 [2024-11-26 20:50:20.027492] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:90872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.061 [2024-11-26 20:50:20.027501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.061 [2024-11-26 20:50:20.027513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:90880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.061 [2024-11-26 20:50:20.027522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.061 [2024-11-26 20:50:20.027533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:90440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.061 [2024-11-26 20:50:20.027543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.061 [2024-11-26 20:50:20.027554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:90448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.061 [2024-11-26 20:50:20.027563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.061 [2024-11-26 20:50:20.027574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:90456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.061 [2024-11-26 20:50:20.027584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.061 [2024-11-26 20:50:20.027595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:90464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.061 [2024-11-26 20:50:20.027604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.061 [2024-11-26 20:50:20.027615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:90472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.061 [2024-11-26 20:50:20.027626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.061 [2024-11-26 20:50:20.027636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:90480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.061 [2024-11-26 20:50:20.027646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.061 [2024-11-26 20:50:20.027657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:90488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.061 [2024-11-26 20:50:20.027666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.061 [2024-11-26 20:50:20.027677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:90496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.061 [2024-11-26 20:50:20.027686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.061 [2024-11-26 20:50:20.027697] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:90504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.061 [2024-11-26 20:50:20.027707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.061 [2024-11-26 20:50:20.027718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:90512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.061 [2024-11-26 20:50:20.027728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.061 [2024-11-26 20:50:20.027739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:90520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.061 [2024-11-26 20:50:20.027748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.061 [2024-11-26 20:50:20.027759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:90528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.061 [2024-11-26 20:50:20.027769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.061 [2024-11-26 20:50:20.027780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:90536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.061 [2024-11-26 20:50:20.027790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.061 [2024-11-26 20:50:20.027802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:90544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.061 [2024-11-26 20:50:20.027811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.061 [2024-11-26 20:50:20.027823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:90552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.061 [2024-11-26 20:50:20.027832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.061 [2024-11-26 20:50:20.027843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:90560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.061 [2024-11-26 20:50:20.027853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.061 [2024-11-26 20:50:20.027864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:90888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.061 [2024-11-26 20:50:20.027874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.061 [2024-11-26 20:50:20.027885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:90896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.061 [2024-11-26 20:50:20.027894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.061 [2024-11-26 20:50:20.027906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:103 nsid:1 lba:90904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.061 [2024-11-26 20:50:20.027915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.061 [2024-11-26 20:50:20.027926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:90912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.061 [2024-11-26 20:50:20.027935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.062 [2024-11-26 20:50:20.027946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:90920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.062 [2024-11-26 20:50:20.027956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.062 [2024-11-26 20:50:20.027967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:90928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.062 [2024-11-26 20:50:20.027976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.062 [2024-11-26 20:50:20.027987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:90936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.062 [2024-11-26 20:50:20.027997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.062 [2024-11-26 20:50:20.028008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:90944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.062 [2024-11-26 20:50:20.028017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.062 [2024-11-26 20:50:20.028028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:90568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.062 [2024-11-26 20:50:20.028037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.062 [2024-11-26 20:50:20.028048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:90576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.062 [2024-11-26 20:50:20.028058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.062 [2024-11-26 20:50:20.028069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:90584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.062 [2024-11-26 20:50:20.028078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.062 [2024-11-26 20:50:20.028089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:90592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.062 [2024-11-26 20:50:20.028100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.062 [2024-11-26 20:50:20.028111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:90600 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.062 [2024-11-26 20:50:20.028120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.062 [2024-11-26 20:50:20.028132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:90608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.062 [2024-11-26 20:50:20.028141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.062 [2024-11-26 20:50:20.028152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:90616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.062 [2024-11-26 20:50:20.028162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.062 [2024-11-26 20:50:20.028179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:90624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.062 [2024-11-26 20:50:20.028189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.062 [2024-11-26 20:50:20.028200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:90952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.062 [2024-11-26 20:50:20.028209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.062 [2024-11-26 20:50:20.028220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:90960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.062 [2024-11-26 20:50:20.028230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.062 [2024-11-26 20:50:20.028241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:90968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.062 [2024-11-26 20:50:20.028250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.062 [2024-11-26 20:50:20.028261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:90976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.062 [2024-11-26 20:50:20.028271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.062 [2024-11-26 20:50:20.028281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:90984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.062 [2024-11-26 20:50:20.028291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.062 [2024-11-26 20:50:20.028302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:90992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.062 [2024-11-26 20:50:20.028312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.062 [2024-11-26 20:50:20.028323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:91000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.062 
[2024-11-26 20:50:20.028333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.062 [2024-11-26 20:50:20.028344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:91008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.062 [2024-11-26 20:50:20.028353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.062 [2024-11-26 20:50:20.028365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:91016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.062 [2024-11-26 20:50:20.028374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.062 [2024-11-26 20:50:20.028385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:91024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.062 [2024-11-26 20:50:20.028395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.062 [2024-11-26 20:50:20.028406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:91032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.062 [2024-11-26 20:50:20.028416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.062 [2024-11-26 20:50:20.028427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:91040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.062 [2024-11-26 20:50:20.028436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.062 [2024-11-26 20:50:20.028447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:91048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.062 [2024-11-26 20:50:20.028457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.062 [2024-11-26 20:50:20.028468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:91056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.062 [2024-11-26 20:50:20.028477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.062 [2024-11-26 20:50:20.028488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:91064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.062 [2024-11-26 20:50:20.028498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.062 [2024-11-26 20:50:20.028509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:91072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.062 [2024-11-26 20:50:20.028518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.062 [2024-11-26 20:50:20.028529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:91080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.062 [2024-11-26 20:50:20.028538] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.062 [2024-11-26 20:50:20.028549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:91088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.062 [2024-11-26 20:50:20.028558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.062 [2024-11-26 20:50:20.028570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:91096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.062 [2024-11-26 20:50:20.028579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.062 [2024-11-26 20:50:20.028590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:91104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.062 [2024-11-26 20:50:20.028599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.062 [2024-11-26 20:50:20.028610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:91112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.062 [2024-11-26 20:50:20.028619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.062 [2024-11-26 20:50:20.028630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:91120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.062 [2024-11-26 20:50:20.028640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.062 [2024-11-26 20:50:20.028651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:91128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.062 [2024-11-26 20:50:20.028661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.062 [2024-11-26 20:50:20.028672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:91136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.062 [2024-11-26 20:50:20.028682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.062 [2024-11-26 20:50:20.028693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:90632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.062 [2024-11-26 20:50:20.028702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.062 [2024-11-26 20:50:20.028714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:90640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.062 [2024-11-26 20:50:20.028723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.062 [2024-11-26 20:50:20.028734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:90648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.062 [2024-11-26 20:50:20.028743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.062 [2024-11-26 20:50:20.028754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:90656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.062 [2024-11-26 20:50:20.028763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.063 [2024-11-26 20:50:20.028774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:90664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.063 [2024-11-26 20:50:20.028785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.063 [2024-11-26 20:50:20.028796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:90672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.063 [2024-11-26 20:50:20.028805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.063 [2024-11-26 20:50:20.028816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:90680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.063 [2024-11-26 20:50:20.028826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.063 [2024-11-26 20:50:20.028837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:90688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.063 [2024-11-26 20:50:20.028846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.063 [2024-11-26 20:50:20.028857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:91144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.063 [2024-11-26 20:50:20.028866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.063 [2024-11-26 20:50:20.028877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:91152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.063 [2024-11-26 20:50:20.028887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.063 [2024-11-26 20:50:20.028898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:91160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.063 [2024-11-26 20:50:20.028907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.063 [2024-11-26 20:50:20.028918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:91168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.063 [2024-11-26 20:50:20.028928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.063 [2024-11-26 20:50:20.028939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:91176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.063 [2024-11-26 20:50:20.028949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:22:25.063 [2024-11-26 20:50:20.028961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:91184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.063 [2024-11-26 20:50:20.028970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.063 [2024-11-26 20:50:20.028981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:91192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.063 [2024-11-26 20:50:20.029001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.063 [2024-11-26 20:50:20.029013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:91200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.063 [2024-11-26 20:50:20.029022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.063 [2024-11-26 20:50:20.029034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:91208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.063 [2024-11-26 20:50:20.029043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.063 [2024-11-26 20:50:20.029055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:91216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.063 [2024-11-26 20:50:20.029064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.063 [2024-11-26 20:50:20.029076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:91224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.063 [2024-11-26 20:50:20.029085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.063 [2024-11-26 20:50:20.029096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:91232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.063 [2024-11-26 20:50:20.029106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.063 [2024-11-26 20:50:20.029117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:91240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.063 [2024-11-26 20:50:20.029126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.063 [2024-11-26 20:50:20.029137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:91248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.063 [2024-11-26 20:50:20.029147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.063 [2024-11-26 20:50:20.029165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:91256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.063 [2024-11-26 20:50:20.029174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.063 [2024-11-26 
20:50:20.029186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:91264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.063 [2024-11-26 20:50:20.029195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.063 [2024-11-26 20:50:20.029206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:90696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.063 [2024-11-26 20:50:20.029216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.063 [2024-11-26 20:50:20.029227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:90704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.063 [2024-11-26 20:50:20.029237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.063 [2024-11-26 20:50:20.029254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:90712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.063 [2024-11-26 20:50:20.029264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.063 [2024-11-26 20:50:20.029275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:90720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.063 [2024-11-26 20:50:20.029284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.063 [2024-11-26 20:50:20.029295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:90728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.063 [2024-11-26 20:50:20.029305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.063 [2024-11-26 20:50:20.029327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:90736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.063 [2024-11-26 20:50:20.029355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.063 [2024-11-26 20:50:20.029367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:90744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.063 [2024-11-26 20:50:20.029391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.063 [2024-11-26 20:50:20.029402] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x193ffd0 is same with the state(6) to be set 00:22:25.063 [2024-11-26 20:50:20.029414] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:25.063 [2024-11-26 20:50:20.029422] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:25.063 [2024-11-26 20:50:20.029431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:90752 len:8 PRP1 0x0 PRP2 0x0 00:22:25.063 [2024-11-26 20:50:20.029440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.063 
[2024-11-26 20:50:20.029451] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:25.063 [2024-11-26 20:50:20.029459] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:25.063 [2024-11-26 20:50:20.029468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91272 len:8 PRP1 0x0 PRP2 0x0 00:22:25.063 [2024-11-26 20:50:20.029477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.063 [2024-11-26 20:50:20.029487] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:25.063 [2024-11-26 20:50:20.029511] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:25.063 [2024-11-26 20:50:20.029519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91280 len:8 PRP1 0x0 PRP2 0x0 00:22:25.063 [2024-11-26 20:50:20.029530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.063 [2024-11-26 20:50:20.029551] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:25.063 [2024-11-26 20:50:20.029559] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:25.063 [2024-11-26 20:50:20.029567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91288 len:8 PRP1 0x0 PRP2 0x0 00:22:25.063 [2024-11-26 20:50:20.029576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.063 [2024-11-26 20:50:20.029586] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:25.063 [2024-11-26 20:50:20.029593] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:25.063 [2024-11-26 20:50:20.029618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91296 len:8 PRP1 0x0 PRP2 0x0 00:22:25.063 [2024-11-26 20:50:20.029628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.063 [2024-11-26 20:50:20.029638] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:25.063 [2024-11-26 20:50:20.029648] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:25.063 [2024-11-26 20:50:20.029656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91304 len:8 PRP1 0x0 PRP2 0x0 00:22:25.063 [2024-11-26 20:50:20.029666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.063 [2024-11-26 20:50:20.029677] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:25.063 [2024-11-26 20:50:20.029686] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:25.063 [2024-11-26 20:50:20.029694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91312 len:8 PRP1 0x0 PRP2 0x0 00:22:25.064 [2024-11-26 20:50:20.029704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.064 [2024-11-26 20:50:20.029714] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:25.064 [2024-11-26 20:50:20.029722] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:25.064 [2024-11-26 20:50:20.029732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91320 len:8 PRP1 0x0 PRP2 0x0 00:22:25.064 [2024-11-26 20:50:20.029743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.064 [2024-11-26 20:50:20.029753] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:25.064 [2024-11-26 20:50:20.029761] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:25.064 [2024-11-26 20:50:20.029769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91328 len:8 PRP1 0x0 PRP2 0x0 00:22:25.064 [2024-11-26 20:50:20.029779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.064 [2024-11-26 20:50:20.029793] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:25.064 [2024-11-26 20:50:20.029801] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:25.064 [2024-11-26 20:50:20.029809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91336 len:8 PRP1 0x0 PRP2 0x0 00:22:25.064 [2024-11-26 20:50:20.029819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.064 [2024-11-26 20:50:20.029829] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:25.064 [2024-11-26 20:50:20.029837] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:25.064 [2024-11-26 20:50:20.029845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91344 len:8 PRP1 0x0 PRP2 0x0 00:22:25.064 [2024-11-26 20:50:20.029855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.064 [2024-11-26 20:50:20.029865] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:25.064 [2024-11-26 20:50:20.029873] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:25.064 [2024-11-26 20:50:20.029881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91352 len:8 PRP1 0x0 PRP2 0x0 00:22:25.064 [2024-11-26 20:50:20.029891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.064 [2024-11-26 20:50:20.029901] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:25.064 [2024-11-26 20:50:20.029909] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:25.064 [2024-11-26 20:50:20.029917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91360 len:8 PRP1 0x0 PRP2 0x0 00:22:25.064 [2024-11-26 20:50:20.029927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.064 [2024-11-26 20:50:20.029937] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting 
queued i/o 00:22:25.064 [2024-11-26 20:50:20.029947] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:25.064 [2024-11-26 20:50:20.029956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91368 len:8 PRP1 0x0 PRP2 0x0 00:22:25.064 [2024-11-26 20:50:20.029966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.064 [2024-11-26 20:50:20.029976] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:25.064 [2024-11-26 20:50:20.029984] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:25.064 [2024-11-26 20:50:20.029992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91376 len:8 PRP1 0x0 PRP2 0x0 00:22:25.064 [2024-11-26 20:50:20.030002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.064 [2024-11-26 20:50:20.030012] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:25.064 [2024-11-26 20:50:20.030020] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:25.064 [2024-11-26 20:50:20.030031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91384 len:8 PRP1 0x0 PRP2 0x0 00:22:25.064 [2024-11-26 20:50:20.030040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.064 [2024-11-26 20:50:20.030051] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:25.064 20:50:20 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@101 -- # sleep 3 00:22:25.064 [2024-11-26 20:50:20.044448] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:25.064 [2024-11-26 20:50:20.044473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91392 len:8 PRP1 0x0 PRP2 0x0 00:22:25.064 [2024-11-26 20:50:20.044488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.064 [2024-11-26 20:50:20.044504] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:25.064 [2024-11-26 20:50:20.044515] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:25.064 [2024-11-26 20:50:20.044527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91400 len:8 PRP1 0x0 PRP2 0x0 00:22:25.064 [2024-11-26 20:50:20.044541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.064 [2024-11-26 20:50:20.044556] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:25.064 [2024-11-26 20:50:20.044567] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:25.064 [2024-11-26 20:50:20.044579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91408 len:8 PRP1 0x0 PRP2 0x0 00:22:25.064 [2024-11-26 20:50:20.044593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.064 [2024-11-26 20:50:20.044607] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:25.064 [2024-11-26 20:50:20.044618] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:25.064 [2024-11-26 20:50:20.044630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91416 len:8 PRP1 0x0 PRP2 0x0 00:22:25.064 [2024-11-26 20:50:20.044643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.064 [2024-11-26 20:50:20.044658] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:25.064 [2024-11-26 20:50:20.044669] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:25.064 [2024-11-26 20:50:20.044680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91424 len:8 PRP1 0x0 PRP2 0x0 00:22:25.064 [2024-11-26 20:50:20.044694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.064 [2024-11-26 20:50:20.044910] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:25.064 [2024-11-26 20:50:20.044935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.064 [2024-11-26 20:50:20.044951] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:25.064 [2024-11-26 20:50:20.044965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.064 [2024-11-26 20:50:20.044980] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:25.064 [2024-11-26 20:50:20.044994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.064 [2024-11-26 20:50:20.045008] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:25.064 [2024-11-26 20:50:20.045022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.064 [2024-11-26 20:50:20.045036] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e1e50 is same with the state(6) to be set 00:22:25.064 [2024-11-26 20:50:20.045348] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:22:25.064 [2024-11-26 20:50:20.045382] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e1e50 (9): Bad file descriptor 00:22:25.064 [2024-11-26 20:50:20.045533] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.064 [2024-11-26 20:50:20.045567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e1e50 with addr=10.0.0.3, port=4420 00:22:25.064 [2024-11-26 20:50:20.045582] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e1e50 is same with the state(6) to be set 00:22:25.064 [2024-11-26 20:50:20.045605] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush 
tqpair=0x18e1e50 (9): Bad file descriptor 00:22:25.064 [2024-11-26 20:50:20.045627] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:22:25.064 [2024-11-26 20:50:20.045642] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:22:25.064 [2024-11-26 20:50:20.045658] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:22:25.064 [2024-11-26 20:50:20.045672] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 00:22:25.064 [2024-11-26 20:50:20.045687] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:22:26.256 5650.50 IOPS, 22.07 MiB/s [2024-11-26T20:50:21.249Z] [2024-11-26 20:50:21.045854] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.256 [2024-11-26 20:50:21.045928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e1e50 with addr=10.0.0.3, port=4420 00:22:26.257 [2024-11-26 20:50:21.045943] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e1e50 is same with the state(6) to be set 00:22:26.257 [2024-11-26 20:50:21.045965] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e1e50 (9): Bad file descriptor 00:22:26.257 [2024-11-26 20:50:21.045983] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:22:26.257 [2024-11-26 20:50:21.045993] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:22:26.257 [2024-11-26 20:50:21.046005] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:22:26.257 [2024-11-26 20:50:21.046016] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 00:22:26.257 [2024-11-26 20:50:21.046027] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:22:27.193 3767.00 IOPS, 14.71 MiB/s [2024-11-26T20:50:22.186Z] [2024-11-26 20:50:22.046171] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:22:27.193 [2024-11-26 20:50:22.046343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e1e50 with addr=10.0.0.3, port=4420 00:22:27.193 [2024-11-26 20:50:22.046446] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e1e50 is same with the state(6) to be set 00:22:27.193 [2024-11-26 20:50:22.046512] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e1e50 (9): Bad file descriptor 00:22:27.193 [2024-11-26 20:50:22.046580] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:22:27.193 [2024-11-26 20:50:22.046683] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:22:27.193 [2024-11-26 20:50:22.046733] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:22:27.193 [2024-11-26 20:50:22.046765] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 
00:22:27.193 [2024-11-26 20:50:22.046813] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:22:28.127 2825.25 IOPS, 11.04 MiB/s [2024-11-26T20:50:23.120Z] 20:50:23 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:22:28.127 [2024-11-26 20:50:23.048801] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:22:28.127 [2024-11-26 20:50:23.048954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e1e50 with addr=10.0.0.3, port=4420 00:22:28.127 [2024-11-26 20:50:23.049060] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e1e50 is same with the state(6) to be set 00:22:28.127 [2024-11-26 20:50:23.049360] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e1e50 (9): Bad file descriptor 00:22:28.127 [2024-11-26 20:50:23.049666] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:22:28.127 [2024-11-26 20:50:23.049765] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:22:28.127 [2024-11-26 20:50:23.049817] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:22:28.127 [2024-11-26 20:50:23.049848] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 00:22:28.127 [2024-11-26 20:50:23.049898] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:22:28.385 [2024-11-26 20:50:23.275010] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:22:28.385 20:50:23 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@103 -- # wait 82856 00:22:29.209 2260.20 IOPS, 8.83 MiB/s [2024-11-26T20:50:24.202Z] [2024-11-26 20:50:24.077207] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 4] Resetting controller successful. 
00:22:31.090 3521.17 IOPS, 13.75 MiB/s [2024-11-26T20:50:27.019Z] 4730.86 IOPS, 18.48 MiB/s [2024-11-26T20:50:27.955Z] 5636.50 IOPS, 22.02 MiB/s [2024-11-26T20:50:29.361Z] 6339.11 IOPS, 24.76 MiB/s [2024-11-26T20:50:29.361Z] 6893.20 IOPS, 26.93 MiB/s 00:22:34.368 Latency(us) 00:22:34.368 [2024-11-26T20:50:29.361Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:34.368 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:34.368 Verification LBA range: start 0x0 length 0x4000 00:22:34.368 NVMe0n1 : 10.01 6897.45 26.94 5019.91 0.00 10721.75 503.22 3035877.18 00:22:34.368 [2024-11-26T20:50:29.361Z] =================================================================================================================== 00:22:34.368 [2024-11-26T20:50:29.361Z] Total : 6897.45 26.94 5019.91 0.00 10721.75 0.00 3035877.18 00:22:34.368 { 00:22:34.368 "results": [ 00:22:34.368 { 00:22:34.368 "job": "NVMe0n1", 00:22:34.368 "core_mask": "0x4", 00:22:34.368 "workload": "verify", 00:22:34.368 "status": "finished", 00:22:34.368 "verify_range": { 00:22:34.368 "start": 0, 00:22:34.368 "length": 16384 00:22:34.368 }, 00:22:34.368 "queue_depth": 128, 00:22:34.368 "io_size": 4096, 00:22:34.368 "runtime": 10.007754, 00:22:34.368 "iops": 6897.451715939461, 00:22:34.368 "mibps": 26.94317076538852, 00:22:34.368 "io_failed": 50238, 00:22:34.368 "io_timeout": 0, 00:22:34.368 "avg_latency_us": 10721.74567197932, 00:22:34.368 "min_latency_us": 503.22285714285715, 00:22:34.368 "max_latency_us": 3035877.180952381 00:22:34.368 } 00:22:34.368 ], 00:22:34.368 "core_count": 1 00:22:34.368 } 00:22:34.368 20:50:28 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@105 -- # killprocess 82728 00:22:34.368 20:50:28 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 82728 ']' 00:22:34.368 20:50:28 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 82728 00:22:34.368 20:50:28 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:22:34.368 20:50:28 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:34.368 20:50:28 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82728 00:22:34.368 killing process with pid 82728 00:22:34.368 Received shutdown signal, test time was about 10.000000 seconds 00:22:34.368 00:22:34.368 Latency(us) 00:22:34.368 [2024-11-26T20:50:29.361Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:34.368 [2024-11-26T20:50:29.361Z] =================================================================================================================== 00:22:34.368 [2024-11-26T20:50:29.361Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:34.368 20:50:28 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:22:34.368 20:50:28 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:22:34.368 20:50:28 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82728' 00:22:34.368 20:50:28 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 82728 00:22:34.368 20:50:28 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 82728 00:22:34.368 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
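The MiB/s figure reported in the JSON results block above follows directly from the reported IOPS and the 4096-byte I/O size used for this run; a quick cross-check of that arithmetic (an illustrative one-liner, not part of host/timeout.sh):

awk 'BEGIN {
    iops = 6897.451715939461                              # "iops" value from the results JSON above
    io_size = 4096                                         # "io_size" in bytes, as configured for the run
    printf "%.2f MiB/s\n", iops * io_size / (1024 * 1024)  # prints 26.94, matching the reported "mibps"
}'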
00:22:34.368 20:50:29 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@110 -- # bdevperf_pid=82970 00:22:34.368 20:50:29 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f 00:22:34.368 20:50:29 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@112 -- # waitforlisten 82970 /var/tmp/bdevperf.sock 00:22:34.368 20:50:29 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 82970 ']' 00:22:34.368 20:50:29 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:34.368 20:50:29 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:34.368 20:50:29 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:34.368 20:50:29 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:34.368 20:50:29 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:34.368 [2024-11-26 20:50:29.202344] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:22:34.368 [2024-11-26 20:50:29.203388] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82970 ] 00:22:34.368 [2024-11-26 20:50:29.350621] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:34.626 [2024-11-26 20:50:29.400484] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:34.626 [2024-11-26 20:50:29.442200] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:35.196 20:50:30 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:35.196 20:50:30 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:22:35.196 20:50:30 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@116 -- # dtrace_pid=82986 00:22:35.196 20:50:30 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 82970 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt 00:22:35.196 20:50:30 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9 00:22:35.454 20:50:30 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:22:36.020 NVMe0n1 00:22:36.020 20:50:30 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@124 -- # rpc_pid=83024 00:22:36.020 20:50:30 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@125 -- # sleep 1 00:22:36.020 20:50:30 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:36.020 Running I/O for 10 seconds... 
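For readability, the setup sequence that the trace above performs step by step is collected here; every binary path, flag, and address is taken verbatim from the traced commands, while the backgrounding of the bdevperf instance and the wait for its RPC socket are assumed, since the log only shows the individual steps:

/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f &
# host/timeout.sh waits here until /var/tmp/bdevperf.sock is listening before issuing RPCs
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
    bdev_nvme_set_options -r -1 -e 9
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
    bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests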
00:22:36.955 20:50:31 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:22:37.216 18923.00 IOPS, 73.92 MiB/s [2024-11-26T20:50:32.209Z] [2024-11-26 20:50:32.035559] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(6) to be set 00:22:37.216 [2024-11-26 20:50:32.035613] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(6) to be set 00:22:37.216 [2024-11-26 20:50:32.035625] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(6) to be set 00:22:37.216 [2024-11-26 20:50:32.035651] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(6) to be set 00:22:37.217 [2024-11-26 20:50:32.035660] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(6) to be set 00:22:37.217 [2024-11-26 20:50:32.035670] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(6) to be set 00:22:37.217 [2024-11-26 20:50:32.035679] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(6) to be set 00:22:37.217 [2024-11-26 20:50:32.035688] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(6) to be set 00:22:37.217 [2024-11-26 20:50:32.035697] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(6) to be set 00:22:37.217 [2024-11-26 20:50:32.035706] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(6) to be set 00:22:37.217 [2024-11-26 20:50:32.035716] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(6) to be set 00:22:37.217 [2024-11-26 20:50:32.035725] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(6) to be set 00:22:37.217 [2024-11-26 20:50:32.035734] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(6) to be set 00:22:37.217 [2024-11-26 20:50:32.035743] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(6) to be set 00:22:37.217 [2024-11-26 20:50:32.035752] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(6) to be set 00:22:37.217 [2024-11-26 20:50:32.035761] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(6) to be set 00:22:37.217 [2024-11-26 20:50:32.035770] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(6) to be set 00:22:37.217 [2024-11-26 20:50:32.035779] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(6) to be set 00:22:37.217 [2024-11-26 20:50:32.035788] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(6) to be set 00:22:37.217 [2024-11-26 20:50:32.035797] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(6) to be set 
00:22:37.217 [2024-11-26 20:50:32.035807] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(6) to be set 00:22:37.217 [2024-11-26 20:50:32.035816] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(6) to be set 00:22:37.217 [2024-11-26 20:50:32.035825] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(6) to be set 00:22:37.217 [2024-11-26 20:50:32.035835] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(6) to be set 00:22:37.217 [2024-11-26 20:50:32.035844] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(6) to be set 00:22:37.217 [2024-11-26 20:50:32.035853] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(6) to be set 00:22:37.217 [2024-11-26 20:50:32.035861] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(6) to be set 00:22:37.217 [2024-11-26 20:50:32.035871] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(6) to be set 00:22:37.217 [2024-11-26 20:50:32.035891] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(6) to be set 00:22:37.217 [2024-11-26 20:50:32.035901] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(6) to be set 00:22:37.217 [2024-11-26 20:50:32.035911] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(6) to be set 00:22:37.217 [2024-11-26 20:50:32.035920] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(6) to be set 00:22:37.217 [2024-11-26 20:50:32.035929] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(6) to be set 00:22:37.217 [2024-11-26 20:50:32.035938] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(6) to be set 00:22:37.217 [2024-11-26 20:50:32.035952] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(6) to be set 00:22:37.217 [2024-11-26 20:50:32.035961] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(6) to be set 00:22:37.217 [2024-11-26 20:50:32.035970] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(6) to be set 00:22:37.217 [2024-11-26 20:50:32.035979] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(6) to be set 00:22:37.217 [2024-11-26 20:50:32.035989] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(6) to be set 00:22:37.217 [2024-11-26 20:50:32.035998] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(6) to be set 00:22:37.217 [2024-11-26 20:50:32.036007] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(6) to be set 00:22:37.217 [2024-11-26 20:50:32.036016] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x1662ac0 is same with the state(6) to be set 00:22:37.217 [2024-11-26 20:50:32.036026] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(6) to be set 00:22:37.217 [2024-11-26 20:50:32.036035] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(6) to be set 00:22:37.217 [2024-11-26 20:50:32.036044] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(6) to be set 00:22:37.217 [2024-11-26 20:50:32.036053] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(6) to be set 00:22:37.217 [2024-11-26 20:50:32.036062] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(6) to be set 00:22:37.217 [2024-11-26 20:50:32.036071] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(6) to be set 00:22:37.217 [2024-11-26 20:50:32.036080] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(6) to be set 00:22:37.217 [2024-11-26 20:50:32.036089] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(6) to be set 00:22:37.217 [2024-11-26 20:50:32.036098] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(6) to be set 00:22:37.217 [2024-11-26 20:50:32.036107] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(6) to be set 00:22:37.217 [2024-11-26 20:50:32.036116] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(6) to be set 00:22:37.217 [2024-11-26 20:50:32.036125] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(6) to be set 00:22:37.217 [2024-11-26 20:50:32.036134] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(6) to be set 00:22:37.217 [2024-11-26 20:50:32.036144] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(6) to be set 00:22:37.217 [2024-11-26 20:50:32.036153] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(6) to be set 00:22:37.217 [2024-11-26 20:50:32.036163] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(6) to be set 00:22:37.217 [2024-11-26 20:50:32.036190] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(6) to be set 00:22:37.217 [2024-11-26 20:50:32.036199] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(6) to be set 00:22:37.217 [2024-11-26 20:50:32.036209] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(6) to be set 00:22:37.217 [2024-11-26 20:50:32.036218] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(6) to be set 00:22:37.217 [2024-11-26 20:50:32.036227] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(6) to be set 00:22:37.217 [2024-11-26 20:50:32.036236] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(6) to be set 00:22:37.217 [2024-11-26 20:50:32.036245] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(6) to be set 00:22:37.217 [2024-11-26 20:50:32.036253] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(6) to be set 00:22:37.217 [2024-11-26 20:50:32.036264] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(6) to be set 00:22:37.217 [2024-11-26 20:50:32.036274] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(6) to be set 00:22:37.217 [2024-11-26 20:50:32.036283] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(6) to be set 00:22:37.217 [2024-11-26 20:50:32.036292] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(6) to be set 00:22:37.217 [2024-11-26 20:50:32.036301] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(6) to be set 00:22:37.217 [2024-11-26 20:50:32.036311] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(6) to be set 00:22:37.218 [2024-11-26 20:50:32.036320] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(6) to be set 00:22:37.218 [2024-11-26 20:50:32.036329] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(6) to be set 00:22:37.218 [2024-11-26 20:50:32.036337] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(6) to be set 00:22:37.218 [2024-11-26 20:50:32.036347] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(6) to be set 00:22:37.218 [2024-11-26 20:50:32.036356] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(6) to be set 00:22:37.218 [2024-11-26 20:50:32.036365] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(6) to be set 00:22:37.218 [2024-11-26 20:50:32.036375] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(6) to be set 00:22:37.218 [2024-11-26 20:50:32.036384] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(6) to be set 00:22:37.218 [2024-11-26 20:50:32.036398] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(6) to be set 00:22:37.218 [2024-11-26 20:50:32.036407] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(6) to be set 00:22:37.218 [2024-11-26 20:50:32.036416] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(6) to be set 00:22:37.218 [2024-11-26 20:50:32.036425] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(6) to be set 00:22:37.218 [2024-11-26 20:50:32.036434] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the 
state(6) to be set 00:22:37.218 [2024-11-26 20:50:32.036444] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(6) to be set 00:22:37.218 [2024-11-26 20:50:32.036453] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(6) to be set 00:22:37.218 [2024-11-26 20:50:32.036461] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(6) to be set 00:22:37.218 [2024-11-26 20:50:32.036470] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(6) to be set 00:22:37.218 [2024-11-26 20:50:32.036479] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(6) to be set 00:22:37.218 [2024-11-26 20:50:32.036488] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(6) to be set 00:22:37.218 [2024-11-26 20:50:32.036497] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(6) to be set 00:22:37.218 [2024-11-26 20:50:32.036505] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(6) to be set 00:22:37.218 [2024-11-26 20:50:32.036515] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(6) to be set 00:22:37.218 [2024-11-26 20:50:32.036524] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(6) to be set 00:22:37.218 [2024-11-26 20:50:32.036533] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(6) to be set 00:22:37.218 [2024-11-26 20:50:32.036542] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(6) to be set 00:22:37.218 [2024-11-26 20:50:32.036551] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(6) to be set 00:22:37.218 [2024-11-26 20:50:32.036561] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(6) to be set 00:22:37.218 [2024-11-26 20:50:32.036570] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(6) to be set 00:22:37.218 [2024-11-26 20:50:32.036579] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(6) to be set 00:22:37.218 [2024-11-26 20:50:32.036587] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(6) to be set 00:22:37.218 [2024-11-26 20:50:32.036596] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(6) to be set 00:22:37.218 [2024-11-26 20:50:32.036605] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(6) to be set 00:22:37.218 [2024-11-26 20:50:32.036614] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(6) to be set 00:22:37.218 [2024-11-26 20:50:32.036623] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(6) to be set 00:22:37.218 [2024-11-26 20:50:32.036632] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(6) to be set 00:22:37.218 [2024-11-26 20:50:32.036641] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(6) to be set 00:22:37.218 [2024-11-26 20:50:32.036649] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(6) to be set 00:22:37.218 [2024-11-26 20:50:32.036658] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(6) to be set 00:22:37.218 [2024-11-26 20:50:32.036667] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(6) to be set 00:22:37.218 [2024-11-26 20:50:32.036676] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(6) to be set 00:22:37.218 [2024-11-26 20:50:32.036684] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(6) to be set 00:22:37.218 [2024-11-26 20:50:32.036693] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(6) to be set 00:22:37.218 [2024-11-26 20:50:32.036702] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(6) to be set 00:22:37.218 [2024-11-26 20:50:32.036711] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(6) to be set 00:22:37.218 [2024-11-26 20:50:32.036719] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(6) to be set 00:22:37.218 [2024-11-26 20:50:32.036728] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(6) to be set 00:22:37.218 [2024-11-26 20:50:32.036737] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(6) to be set 00:22:37.218 [2024-11-26 20:50:32.036746] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(6) to be set 00:22:37.218 [2024-11-26 20:50:32.036754] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(6) to be set 00:22:37.218 [2024-11-26 20:50:32.036764] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(6) to be set 00:22:37.218 [2024-11-26 20:50:32.036774] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(6) to be set 00:22:37.218 [2024-11-26 20:50:32.036783] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(6) to be set 00:22:37.218 [2024-11-26 20:50:32.036792] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(6) to be set 00:22:37.218 [2024-11-26 20:50:32.036802] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(6) to be set 00:22:37.218 [2024-11-26 20:50:32.036886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:86776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.218 [2024-11-26 20:50:32.036919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:37.218 [2024-11-26 20:50:32.036942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:11808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.218 [2024-11-26 20:50:32.036954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.218 [2024-11-26 20:50:32.036966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:65328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.218 [2024-11-26 20:50:32.036976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.218 [2024-11-26 20:50:32.036988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:52624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.218 [2024-11-26 20:50:32.036998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.218 [2024-11-26 20:50:32.037009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:118920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.218 [2024-11-26 20:50:32.037018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.218 [2024-11-26 20:50:32.037030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:21848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.218 [2024-11-26 20:50:32.037041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.218 [2024-11-26 20:50:32.037052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:37320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.218 [2024-11-26 20:50:32.037062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.218 [2024-11-26 20:50:32.037073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:124392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.218 [2024-11-26 20:50:32.037083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.218 [2024-11-26 20:50:32.037094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:35872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.218 [2024-11-26 20:50:32.037104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.218 [2024-11-26 20:50:32.037115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:2864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.218 [2024-11-26 20:50:32.037124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.218 [2024-11-26 20:50:32.037135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:55176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.218 [2024-11-26 20:50:32.037145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.218 [2024-11-26 
20:50:32.037166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:117056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.218 [2024-11-26 20:50:32.037177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.218 [2024-11-26 20:50:32.037188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:17232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.218 [2024-11-26 20:50:32.037197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.218 [2024-11-26 20:50:32.037209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:90344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.218 [2024-11-26 20:50:32.037218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.219 [2024-11-26 20:50:32.037230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:71344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.219 [2024-11-26 20:50:32.037240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.219 [2024-11-26 20:50:32.037251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:84952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.219 [2024-11-26 20:50:32.037260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.219 [2024-11-26 20:50:32.037271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:51480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.219 [2024-11-26 20:50:32.037282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.219 [2024-11-26 20:50:32.037294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:119160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.219 [2024-11-26 20:50:32.037303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.219 [2024-11-26 20:50:32.037314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:62200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.219 [2024-11-26 20:50:32.037324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.219 [2024-11-26 20:50:32.037335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:70472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.219 [2024-11-26 20:50:32.037344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.219 [2024-11-26 20:50:32.037356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:38400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.219 [2024-11-26 20:50:32.037365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.219 [2024-11-26 20:50:32.037376] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:86344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.219 [2024-11-26 20:50:32.037386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.219 [2024-11-26 20:50:32.037397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:89528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.219 [2024-11-26 20:50:32.037407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.219 [2024-11-26 20:50:32.037418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:18096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.219 [2024-11-26 20:50:32.037427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.219 [2024-11-26 20:50:32.037438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:120832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.219 [2024-11-26 20:50:32.037448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.219 [2024-11-26 20:50:32.037458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:106040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.219 [2024-11-26 20:50:32.037468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.219 [2024-11-26 20:50:32.037481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:116136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.219 [2024-11-26 20:50:32.037492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.219 [2024-11-26 20:50:32.037504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:89784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.219 [2024-11-26 20:50:32.037513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.219 [2024-11-26 20:50:32.037528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:74416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.219 [2024-11-26 20:50:32.037537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.219 [2024-11-26 20:50:32.037549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:36768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.219 [2024-11-26 20:50:32.037558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.219 [2024-11-26 20:50:32.037570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:91920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.219 [2024-11-26 20:50:32.037579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.219 [2024-11-26 20:50:32.037590] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:95 nsid:1 lba:71032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.219 [2024-11-26 20:50:32.037600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.219 [2024-11-26 20:50:32.037610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:40704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.219 [2024-11-26 20:50:32.037621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.219 [2024-11-26 20:50:32.037633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:18040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.219 [2024-11-26 20:50:32.037642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.219 [2024-11-26 20:50:32.037653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:113592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.219 [2024-11-26 20:50:32.037663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.219 [2024-11-26 20:50:32.037674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:46768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.219 [2024-11-26 20:50:32.037684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.219 [2024-11-26 20:50:32.037695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:39808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.219 [2024-11-26 20:50:32.037704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.219 [2024-11-26 20:50:32.037716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:130320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.219 [2024-11-26 20:50:32.037725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.219 [2024-11-26 20:50:32.037736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:2768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.219 [2024-11-26 20:50:32.037746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.219 [2024-11-26 20:50:32.037756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:61296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.219 [2024-11-26 20:50:32.037766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.219 [2024-11-26 20:50:32.037777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:89144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.219 [2024-11-26 20:50:32.037786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.219 [2024-11-26 20:50:32.037797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 
lba:120656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.219 [2024-11-26 20:50:32.037807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.219 [2024-11-26 20:50:32.037817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:104760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.219 [2024-11-26 20:50:32.037828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.219 [2024-11-26 20:50:32.037840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:7768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.219 [2024-11-26 20:50:32.037849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.219 [2024-11-26 20:50:32.037860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:84632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.219 [2024-11-26 20:50:32.037869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.219 [2024-11-26 20:50:32.037880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:119472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.219 [2024-11-26 20:50:32.037890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.219 [2024-11-26 20:50:32.037901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:104728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.219 [2024-11-26 20:50:32.037910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.219 [2024-11-26 20:50:32.037921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:91032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.219 [2024-11-26 20:50:32.037930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.219 [2024-11-26 20:50:32.037942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:58808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.219 [2024-11-26 20:50:32.037952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.219 [2024-11-26 20:50:32.037963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:79352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.219 [2024-11-26 20:50:32.037973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.219 [2024-11-26 20:50:32.037984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:96848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.219 [2024-11-26 20:50:32.037993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.219 [2024-11-26 20:50:32.038004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:110800 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:22:37.219 [2024-11-26 20:50:32.038014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.219 [2024-11-26 20:50:32.038024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:12816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.219 [2024-11-26 20:50:32.038034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.219 [2024-11-26 20:50:32.038045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:25000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.219 [2024-11-26 20:50:32.038055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.220 [2024-11-26 20:50:32.038066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:39440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.220 [2024-11-26 20:50:32.038075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.220 [2024-11-26 20:50:32.038087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:104736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.220 [2024-11-26 20:50:32.038096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.220 [2024-11-26 20:50:32.038108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:108968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.220 [2024-11-26 20:50:32.038117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.220 [2024-11-26 20:50:32.038128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:57224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.220 [2024-11-26 20:50:32.038138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.220 [2024-11-26 20:50:32.038149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:13088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.220 [2024-11-26 20:50:32.038168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.220 [2024-11-26 20:50:32.038179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.220 [2024-11-26 20:50:32.038188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.220 [2024-11-26 20:50:32.038200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:85000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.220 [2024-11-26 20:50:32.038210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.220 [2024-11-26 20:50:32.038221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:37648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.220 [2024-11-26 
20:50:32.038231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.220 [2024-11-26 20:50:32.038242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:110024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.220 [2024-11-26 20:50:32.038251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.220 [2024-11-26 20:50:32.038262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:118568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.220 [2024-11-26 20:50:32.038271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.220 [2024-11-26 20:50:32.038282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:118320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.220 [2024-11-26 20:50:32.038292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.220 [2024-11-26 20:50:32.038304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:76056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.220 [2024-11-26 20:50:32.038313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.220 [2024-11-26 20:50:32.038325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:20432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.220 [2024-11-26 20:50:32.038335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.220 [2024-11-26 20:50:32.038346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:113312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.220 [2024-11-26 20:50:32.038356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.220 [2024-11-26 20:50:32.038367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:76448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.220 [2024-11-26 20:50:32.038376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.220 [2024-11-26 20:50:32.038387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:26640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.220 [2024-11-26 20:50:32.038397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.220 [2024-11-26 20:50:32.038408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:76912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.220 [2024-11-26 20:50:32.038417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.220 [2024-11-26 20:50:32.038428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.220 [2024-11-26 20:50:32.038438] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.220 [2024-11-26 20:50:32.038449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:12648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.220 [2024-11-26 20:50:32.038466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.220 [2024-11-26 20:50:32.038477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:32720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.220 [2024-11-26 20:50:32.038487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.220 [2024-11-26 20:50:32.038498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:128120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.220 [2024-11-26 20:50:32.038508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.220 [2024-11-26 20:50:32.038519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:81984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.220 [2024-11-26 20:50:32.038528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.220 [2024-11-26 20:50:32.038539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:38904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.220 [2024-11-26 20:50:32.038549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.220 [2024-11-26 20:50:32.038560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:63944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.220 [2024-11-26 20:50:32.038569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.220 [2024-11-26 20:50:32.038581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:116256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.220 [2024-11-26 20:50:32.038590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.220 [2024-11-26 20:50:32.038601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:20776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.220 [2024-11-26 20:50:32.038610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.220 [2024-11-26 20:50:32.038621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:59752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.220 [2024-11-26 20:50:32.038631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.220 [2024-11-26 20:50:32.038642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:108752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.220 [2024-11-26 20:50:32.038651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.220 [2024-11-26 20:50:32.038662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:7480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.220 [2024-11-26 20:50:32.038672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.220 [2024-11-26 20:50:32.038683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:60720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.220 [2024-11-26 20:50:32.038692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.220 [2024-11-26 20:50:32.038703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:112184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.220 [2024-11-26 20:50:32.038713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.220 [2024-11-26 20:50:32.038724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:14680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.220 [2024-11-26 20:50:32.038733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.220 [2024-11-26 20:50:32.038744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:123648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.220 [2024-11-26 20:50:32.038754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.220 [2024-11-26 20:50:32.038765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:119656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.220 [2024-11-26 20:50:32.038775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.220 [2024-11-26 20:50:32.038786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:74848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.220 [2024-11-26 20:50:32.038797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.221 [2024-11-26 20:50:32.038808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:65848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.221 [2024-11-26 20:50:32.038818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.221 [2024-11-26 20:50:32.038829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:48688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.221 [2024-11-26 20:50:32.038839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.221 [2024-11-26 20:50:32.038851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:103560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.221 [2024-11-26 20:50:32.038860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.221 [2024-11-26 20:50:32.038871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:89968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.221 [2024-11-26 20:50:32.038881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.221 [2024-11-26 20:50:32.038892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:69584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.221 [2024-11-26 20:50:32.038902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.221 [2024-11-26 20:50:32.038913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.221 [2024-11-26 20:50:32.038922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.221 [2024-11-26 20:50:32.038934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:74408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.221 [2024-11-26 20:50:32.038943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.221 [2024-11-26 20:50:32.038954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:59496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.221 [2024-11-26 20:50:32.038964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.221 [2024-11-26 20:50:32.038975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:56864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.221 [2024-11-26 20:50:32.038984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.221 [2024-11-26 20:50:32.038995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:50056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.221 [2024-11-26 20:50:32.039004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.221 [2024-11-26 20:50:32.039015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:98672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.221 [2024-11-26 20:50:32.039025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.221 [2024-11-26 20:50:32.039036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:116176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.221 [2024-11-26 20:50:32.039045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.221 [2024-11-26 20:50:32.039056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:130664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.221 [2024-11-26 20:50:32.039065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.221 
[2024-11-26 20:50:32.039076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:32424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.221 [2024-11-26 20:50:32.039086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.221 [2024-11-26 20:50:32.039097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:79368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.221 [2024-11-26 20:50:32.039106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.221 [2024-11-26 20:50:32.039117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:37800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.221 [2024-11-26 20:50:32.039127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.221 [2024-11-26 20:50:32.039138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:126120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.221 [2024-11-26 20:50:32.039147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.221 [2024-11-26 20:50:32.039166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:29008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.221 [2024-11-26 20:50:32.039176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.221 [2024-11-26 20:50:32.039188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:71432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.221 [2024-11-26 20:50:32.039197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.221 [2024-11-26 20:50:32.039208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:95096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.221 [2024-11-26 20:50:32.039217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.221 [2024-11-26 20:50:32.039228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:73272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.221 [2024-11-26 20:50:32.039237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.221 [2024-11-26 20:50:32.039249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:1464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.221 [2024-11-26 20:50:32.039258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.221 [2024-11-26 20:50:32.039269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.221 [2024-11-26 20:50:32.039278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.221 [2024-11-26 20:50:32.039289] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:64560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.221 [2024-11-26 20:50:32.039307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.221 [2024-11-26 20:50:32.039319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:61944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.221 [2024-11-26 20:50:32.039328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.221 [2024-11-26 20:50:32.039340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:33720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.221 [2024-11-26 20:50:32.039349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.221 [2024-11-26 20:50:32.039360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:77928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.221 [2024-11-26 20:50:32.039369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.221 [2024-11-26 20:50:32.039380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:76408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.221 [2024-11-26 20:50:32.039389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.222 [2024-11-26 20:50:32.039400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:82016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.222 [2024-11-26 20:50:32.039409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.222 [2024-11-26 20:50:32.039422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:65656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.222 [2024-11-26 20:50:32.039432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.222 [2024-11-26 20:50:32.039443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:56720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.222 [2024-11-26 20:50:32.039452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.222 [2024-11-26 20:50:32.039464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:45112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.222 [2024-11-26 20:50:32.039473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.222 [2024-11-26 20:50:32.039484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:125752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.222 [2024-11-26 20:50:32.039494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.222 [2024-11-26 20:50:32.039505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:4 nsid:1 lba:77528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.222 [2024-11-26 20:50:32.039517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.222 [2024-11-26 20:50:32.039528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.222 [2024-11-26 20:50:32.039538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.222 [2024-11-26 20:50:32.039549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:83192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.222 [2024-11-26 20:50:32.039558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.222 [2024-11-26 20:50:32.039569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:52728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.222 [2024-11-26 20:50:32.039578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.222 [2024-11-26 20:50:32.039589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:126728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.222 [2024-11-26 20:50:32.039598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.222 [2024-11-26 20:50:32.039609] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbce20 is same with the state(6) to be set 00:22:37.222 [2024-11-26 20:50:32.039622] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:37.222 [2024-11-26 20:50:32.039630] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:37.222 [2024-11-26 20:50:32.039639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96256 len:8 PRP1 0x0 PRP2 0x0 00:22:37.222 [2024-11-26 20:50:32.039648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.222 [2024-11-26 20:50:32.039964] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:22:37.222 [2024-11-26 20:50:32.040058] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf4fe50 (9): Bad file descriptor 00:22:37.222 [2024-11-26 20:50:32.040171] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:22:37.222 [2024-11-26 20:50:32.040188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf4fe50 with addr=10.0.0.3, port=4420 00:22:37.222 [2024-11-26 20:50:32.040199] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf4fe50 is same with the state(6) to be set 00:22:37.222 [2024-11-26 20:50:32.040214] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf4fe50 (9): Bad file descriptor 00:22:37.222 [2024-11-26 20:50:32.040230] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:22:37.222 [2024-11-26 20:50:32.040239] 
nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:22:37.222 [2024-11-26 20:50:32.040251] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:22:37.222 [2024-11-26 20:50:32.040261] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 00:22:37.222 [2024-11-26 20:50:32.040272] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:22:37.222 20:50:32 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@128 -- # wait 83024 00:22:39.093 10731.50 IOPS, 41.92 MiB/s [2024-11-26T20:50:34.086Z] 7154.33 IOPS, 27.95 MiB/s [2024-11-26T20:50:34.086Z] [2024-11-26 20:50:34.040464] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:22:39.093 [2024-11-26 20:50:34.040530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf4fe50 with addr=10.0.0.3, port=4420 00:22:39.093 [2024-11-26 20:50:34.040545] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf4fe50 is same with the state(6) to be set 00:22:39.093 [2024-11-26 20:50:34.040576] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf4fe50 (9): Bad file descriptor 00:22:39.093 [2024-11-26 20:50:34.040594] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:22:39.093 [2024-11-26 20:50:34.040604] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:22:39.093 [2024-11-26 20:50:34.040616] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:22:39.093 [2024-11-26 20:50:34.040627] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 00:22:39.093 [2024-11-26 20:50:34.040639] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:22:40.967 5365.75 IOPS, 20.96 MiB/s [2024-11-26T20:50:36.219Z] 4292.60 IOPS, 16.77 MiB/s [2024-11-26T20:50:36.219Z] [2024-11-26 20:50:36.040827] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:22:41.226 [2024-11-26 20:50:36.040886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf4fe50 with addr=10.0.0.3, port=4420 00:22:41.226 [2024-11-26 20:50:36.040900] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf4fe50 is same with the state(6) to be set 00:22:41.227 [2024-11-26 20:50:36.040924] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf4fe50 (9): Bad file descriptor 00:22:41.227 [2024-11-26 20:50:36.040952] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:22:41.227 [2024-11-26 20:50:36.040963] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:22:41.227 [2024-11-26 20:50:36.040974] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:22:41.227 [2024-11-26 20:50:36.040985] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 
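Note on the failure burst above: the ABORTED - SQ DELETION completions fail every outstanding READ of the queue-depth-128 job when the submission queue is torn down, consistent with the io_failed count of 128 reported in the per-run results further below. The uring connect() failures that follow each reset attempt land at 20:50:32, 20:50:34 and 20:50:36, showing the host's roughly two-second reconnect delay. Below is a minimal sketch for extracting that cadence from a saved copy of this console output; the file name build.log is only an assumption.

  # Assumes the log above was saved verbatim as build.log (hypothetical name).
  # Prints the timestamp of every failed connect attempt and the gap, in
  # seconds, since the previous attempt.
  grep -o '\[2024-11-26 [0-9:.]*\] uring.c: 664:uring_sock_create' build.log \
    | awk -F'[][ ]' '{ split($3, t, ":"); now = t[1]*3600 + t[2]*60 + t[3];
                       if (prev) printf "%s  +%.3fs\n", $3, now - prev; else print $3;
                       prev = now }'

Against the attempts shown here this prints gaps of roughly +2.000s, which matches the ~2000 ms spacing of the reconnect probes listed in trace.txt further below.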
00:22:41.227 [2024-11-26 20:50:36.040997] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:22:43.099 3577.17 IOPS, 13.97 MiB/s [2024-11-26T20:50:38.092Z] 3066.14 IOPS, 11.98 MiB/s [2024-11-26T20:50:38.092Z] [2024-11-26 20:50:38.041087] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:22:43.099 [2024-11-26 20:50:38.041132] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:22:43.099 [2024-11-26 20:50:38.041144] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:22:43.099 [2024-11-26 20:50:38.041162] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] already in failed state 00:22:43.099 [2024-11-26 20:50:38.041175] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 00:22:44.292 2682.88 IOPS, 10.48 MiB/s 00:22:44.292 Latency(us) 00:22:44.292 [2024-11-26T20:50:39.285Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:44.292 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 00:22:44.292 NVMe0n1 : 8.15 2633.22 10.29 15.70 0.00 48330.40 6491.18 7030452.42 00:22:44.292 [2024-11-26T20:50:39.285Z] =================================================================================================================== 00:22:44.292 [2024-11-26T20:50:39.285Z] Total : 2633.22 10.29 15.70 0.00 48330.40 6491.18 7030452.42 00:22:44.292 { 00:22:44.292 "results": [ 00:22:44.292 { 00:22:44.292 "job": "NVMe0n1", 00:22:44.292 "core_mask": "0x4", 00:22:44.292 "workload": "randread", 00:22:44.292 "status": "finished", 00:22:44.292 "queue_depth": 128, 00:22:44.292 "io_size": 4096, 00:22:44.292 "runtime": 8.150858, 00:22:44.292 "iops": 2633.2197174825033, 00:22:44.292 "mibps": 10.286014521416028, 00:22:44.292 "io_failed": 128, 00:22:44.292 "io_timeout": 0, 00:22:44.292 "avg_latency_us": 48330.39521670184, 00:22:44.292 "min_latency_us": 6491.184761904762, 00:22:44.292 "max_latency_us": 7030452.419047619 00:22:44.292 } 00:22:44.292 ], 00:22:44.293 "core_count": 1 00:22:44.293 } 00:22:44.293 20:50:39 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:44.293 Attaching 5 probes... 
00:22:44.293 1277.634213: reset bdev controller NVMe0 00:22:44.293 1277.797449: reconnect bdev controller NVMe0 00:22:44.293 3278.023383: reconnect delay bdev controller NVMe0 00:22:44.293 3278.046003: reconnect bdev controller NVMe0 00:22:44.293 5278.399026: reconnect delay bdev controller NVMe0 00:22:44.293 5278.418034: reconnect bdev controller NVMe0 00:22:44.293 7278.762590: reconnect delay bdev controller NVMe0 00:22:44.293 7278.779722: reconnect bdev controller NVMe0 00:22:44.293 20:50:39 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:22:44.293 20:50:39 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:22:44.293 20:50:39 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@136 -- # kill 82986 00:22:44.293 20:50:39 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:44.293 20:50:39 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@139 -- # killprocess 82970 00:22:44.293 20:50:39 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 82970 ']' 00:22:44.293 20:50:39 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 82970 00:22:44.293 20:50:39 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:22:44.293 20:50:39 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:44.293 20:50:39 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82970 00:22:44.293 killing process with pid 82970 00:22:44.293 Received shutdown signal, test time was about 8.227685 seconds 00:22:44.293 00:22:44.293 Latency(us) 00:22:44.293 [2024-11-26T20:50:39.286Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:44.293 [2024-11-26T20:50:39.286Z] =================================================================================================================== 00:22:44.293 [2024-11-26T20:50:39.286Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:44.293 20:50:39 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:22:44.293 20:50:39 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:22:44.293 20:50:39 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82970' 00:22:44.293 20:50:39 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 82970 00:22:44.293 20:50:39 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 82970 00:22:44.551 20:50:39 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:44.551 20:50:39 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:22:44.551 20:50:39 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@145 -- # nvmftestfini 00:22:44.551 20:50:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:44.551 20:50:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@121 -- # sync 00:22:44.810 20:50:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:44.810 20:50:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@124 -- # set +e 00:22:44.810 20:50:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:44.810 20:50:39 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:44.810 rmmod nvme_tcp 00:22:44.810 rmmod nvme_fabrics 00:22:44.810 rmmod nvme_keyring 00:22:44.810 20:50:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:44.810 20:50:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@128 -- # set -e 00:22:44.810 20:50:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@129 -- # return 0 00:22:44.810 20:50:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@517 -- # '[' -n 82547 ']' 00:22:44.810 20:50:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@518 -- # killprocess 82547 00:22:44.810 20:50:39 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 82547 ']' 00:22:44.810 20:50:39 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 82547 00:22:44.810 20:50:39 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:22:44.810 20:50:39 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:44.810 20:50:39 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82547 00:22:44.810 killing process with pid 82547 00:22:44.810 20:50:39 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:44.810 20:50:39 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:44.810 20:50:39 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82547' 00:22:44.810 20:50:39 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 82547 00:22:44.810 20:50:39 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 82547 00:22:45.069 20:50:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:45.069 20:50:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:45.069 20:50:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:45.069 20:50:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@297 -- # iptr 00:22:45.069 20:50:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # iptables-save 00:22:45.069 20:50:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # iptables-restore 00:22:45.069 20:50:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:45.069 20:50:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:45.069 20:50:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:22:45.069 20:50:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:22:45.069 20:50:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:22:45.069 20:50:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:22:45.069 20:50:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:22:45.069 20:50:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:22:45.069 20:50:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:22:45.069 20:50:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:22:45.069 20:50:39 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:22:45.069 20:50:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:22:45.069 20:50:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:22:45.069 20:50:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:22:45.329 20:50:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:45.329 20:50:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:45.329 20:50:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@246 -- # remove_spdk_ns 00:22:45.329 20:50:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:45.329 20:50:40 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:45.329 20:50:40 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:45.329 20:50:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@300 -- # return 0 00:22:45.329 ************************************ 00:22:45.329 END TEST nvmf_timeout 00:22:45.329 ************************************ 00:22:45.329 00:22:45.329 real 0m46.136s 00:22:45.329 user 2m13.239s 00:22:45.329 sys 0m7.123s 00:22:45.329 20:50:40 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:45.329 20:50:40 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:45.329 20:50:40 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ virt == phy ]] 00:22:45.329 20:50:40 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:22:45.329 ************************************ 00:22:45.329 END TEST nvmf_host 00:22:45.329 ************************************ 00:22:45.329 00:22:45.329 real 5m11.124s 00:22:45.329 user 13m8.514s 00:22:45.329 sys 1m28.286s 00:22:45.329 20:50:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:45.329 20:50:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:45.329 20:50:40 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:22:45.329 20:50:40 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 1 -eq 0 ]] 00:22:45.329 ************************************ 00:22:45.329 END TEST nvmf_tcp 00:22:45.329 ************************************ 00:22:45.329 00:22:45.329 real 13m9.145s 00:22:45.329 user 30m43.507s 00:22:45.329 sys 3m57.821s 00:22:45.329 20:50:40 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:45.329 20:50:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:45.329 20:50:40 -- spdk/autotest.sh@285 -- # [[ 1 -eq 0 ]] 00:22:45.329 20:50:40 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:22:45.329 20:50:40 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:45.329 20:50:40 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:45.329 20:50:40 -- common/autotest_common.sh@10 -- # set +x 00:22:45.589 ************************************ 00:22:45.589 START TEST nvmf_dif 00:22:45.589 ************************************ 00:22:45.589 20:50:40 nvmf_dif -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:22:45.589 * Looking for test storage... 
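Each suite above is dispatched through the harness's run_test helper (autotest.sh@289 invokes run_test nvmf_dif with the dif.sh path), which prints the starred START TEST / END TEST banners and a timing summary around the wrapped script; the real/user/sys block after END TEST nvmf_timeout is consistent with bash's built-in time. The snippet below is only a simplified sketch of that pattern, not SPDK's actual implementation in autotest_common.sh, and the banner widths are illustrative.

  # Illustrative run_test-style wrapper (a sketch; SPDK's real helper differs).
  run_test() {
      local name=$1; shift
      printf '************************************\n'
      printf 'START TEST %s\n' "$name"
      printf '************************************\n'
      time "$@"            # bash's time keyword prints real/user/sys when the command returns
      local rc=$?
      printf '************************************\n'
      printf 'END TEST %s\n' "$name"
      printf '************************************\n'
      return "$rc"
  }

  run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh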
00:22:45.589 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:22:45.589 20:50:40 nvmf_dif -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:45.589 20:50:40 nvmf_dif -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:45.589 20:50:40 nvmf_dif -- common/autotest_common.sh@1693 -- # lcov --version 00:22:45.589 20:50:40 nvmf_dif -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:45.589 20:50:40 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:45.589 20:50:40 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:45.589 20:50:40 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:45.589 20:50:40 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:22:45.589 20:50:40 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:22:45.589 20:50:40 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:22:45.589 20:50:40 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:22:45.589 20:50:40 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:22:45.589 20:50:40 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:22:45.589 20:50:40 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:22:45.589 20:50:40 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:45.589 20:50:40 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:22:45.589 20:50:40 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:22:45.589 20:50:40 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:45.589 20:50:40 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:45.589 20:50:40 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:22:45.589 20:50:40 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:22:45.589 20:50:40 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:45.589 20:50:40 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:22:45.589 20:50:40 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:22:45.589 20:50:40 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:22:45.589 20:50:40 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:22:45.589 20:50:40 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:45.589 20:50:40 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:22:45.589 20:50:40 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:22:45.589 20:50:40 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:45.589 20:50:40 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:45.589 20:50:40 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:22:45.589 20:50:40 nvmf_dif -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:45.589 20:50:40 nvmf_dif -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:45.589 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:45.589 --rc genhtml_branch_coverage=1 00:22:45.589 --rc genhtml_function_coverage=1 00:22:45.589 --rc genhtml_legend=1 00:22:45.589 --rc geninfo_all_blocks=1 00:22:45.589 --rc geninfo_unexecuted_blocks=1 00:22:45.589 00:22:45.589 ' 00:22:45.589 20:50:40 nvmf_dif -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:45.589 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:45.589 --rc genhtml_branch_coverage=1 00:22:45.589 --rc genhtml_function_coverage=1 00:22:45.589 --rc genhtml_legend=1 00:22:45.589 --rc geninfo_all_blocks=1 00:22:45.589 --rc geninfo_unexecuted_blocks=1 00:22:45.589 00:22:45.589 ' 00:22:45.589 20:50:40 nvmf_dif -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:22:45.589 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:45.589 --rc genhtml_branch_coverage=1 00:22:45.589 --rc genhtml_function_coverage=1 00:22:45.589 --rc genhtml_legend=1 00:22:45.589 --rc geninfo_all_blocks=1 00:22:45.589 --rc geninfo_unexecuted_blocks=1 00:22:45.589 00:22:45.589 ' 00:22:45.589 20:50:40 nvmf_dif -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:45.589 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:45.589 --rc genhtml_branch_coverage=1 00:22:45.589 --rc genhtml_function_coverage=1 00:22:45.589 --rc genhtml_legend=1 00:22:45.589 --rc geninfo_all_blocks=1 00:22:45.589 --rc geninfo_unexecuted_blocks=1 00:22:45.589 00:22:45.589 ' 00:22:45.589 20:50:40 nvmf_dif -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:45.589 20:50:40 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:22:45.589 20:50:40 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:45.589 20:50:40 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:45.589 20:50:40 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:45.589 20:50:40 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:45.589 20:50:40 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:45.589 20:50:40 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:45.589 20:50:40 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:45.589 20:50:40 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:45.589 20:50:40 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:45.589 20:50:40 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:45.589 20:50:40 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b 00:22:45.589 20:50:40 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=5b7a0101-ee75-44bd-b64f-b6a56d193f2b 00:22:45.589 20:50:40 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:45.589 20:50:40 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:45.589 20:50:40 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:45.589 20:50:40 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:45.589 20:50:40 nvmf_dif -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:45.589 20:50:40 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:22:45.589 20:50:40 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:45.589 20:50:40 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:45.589 20:50:40 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:45.589 20:50:40 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:45.589 20:50:40 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:45.589 20:50:40 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:45.589 20:50:40 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:22:45.589 20:50:40 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:45.589 20:50:40 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:22:45.589 20:50:40 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:45.589 20:50:40 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:45.589 20:50:40 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:45.590 20:50:40 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:45.590 20:50:40 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:45.590 20:50:40 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:45.590 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:45.590 20:50:40 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:45.590 20:50:40 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:45.590 20:50:40 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:45.590 20:50:40 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:22:45.590 20:50:40 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:22:45.590 20:50:40 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:22:45.590 20:50:40 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:22:45.590 20:50:40 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:22:45.590 20:50:40 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:45.590 20:50:40 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:45.590 20:50:40 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:45.590 20:50:40 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:45.590 20:50:40 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:45.590 20:50:40 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:45.590 20:50:40 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:22:45.590 20:50:40 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:45.849 20:50:40 nvmf_dif -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:22:45.849 20:50:40 nvmf_dif -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:22:45.849 20:50:40 nvmf_dif -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:22:45.849 20:50:40 
nvmf_dif -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:22:45.849 20:50:40 nvmf_dif -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:22:45.849 20:50:40 nvmf_dif -- nvmf/common.sh@460 -- # nvmf_veth_init 00:22:45.849 20:50:40 nvmf_dif -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:45.849 20:50:40 nvmf_dif -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:22:45.849 20:50:40 nvmf_dif -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:22:45.849 20:50:40 nvmf_dif -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:22:45.849 20:50:40 nvmf_dif -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:45.849 20:50:40 nvmf_dif -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:22:45.849 20:50:40 nvmf_dif -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:45.849 20:50:40 nvmf_dif -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:22:45.849 20:50:40 nvmf_dif -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:45.849 20:50:40 nvmf_dif -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:22:45.849 20:50:40 nvmf_dif -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:45.849 20:50:40 nvmf_dif -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:45.849 20:50:40 nvmf_dif -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:45.849 20:50:40 nvmf_dif -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:45.849 20:50:40 nvmf_dif -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:45.849 20:50:40 nvmf_dif -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:45.849 20:50:40 nvmf_dif -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:22:45.849 Cannot find device "nvmf_init_br" 00:22:45.849 20:50:40 nvmf_dif -- nvmf/common.sh@162 -- # true 00:22:45.849 20:50:40 nvmf_dif -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:22:45.849 Cannot find device "nvmf_init_br2" 00:22:45.849 20:50:40 nvmf_dif -- nvmf/common.sh@163 -- # true 00:22:45.849 20:50:40 nvmf_dif -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:22:45.849 Cannot find device "nvmf_tgt_br" 00:22:45.849 20:50:40 nvmf_dif -- nvmf/common.sh@164 -- # true 00:22:45.850 20:50:40 nvmf_dif -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:22:45.850 Cannot find device "nvmf_tgt_br2" 00:22:45.850 20:50:40 nvmf_dif -- nvmf/common.sh@165 -- # true 00:22:45.850 20:50:40 nvmf_dif -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:22:45.850 Cannot find device "nvmf_init_br" 00:22:45.850 20:50:40 nvmf_dif -- nvmf/common.sh@166 -- # true 00:22:45.850 20:50:40 nvmf_dif -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:22:45.850 Cannot find device "nvmf_init_br2" 00:22:45.850 20:50:40 nvmf_dif -- nvmf/common.sh@167 -- # true 00:22:45.850 20:50:40 nvmf_dif -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:22:45.850 Cannot find device "nvmf_tgt_br" 00:22:45.850 20:50:40 nvmf_dif -- nvmf/common.sh@168 -- # true 00:22:45.850 20:50:40 nvmf_dif -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:22:45.850 Cannot find device "nvmf_tgt_br2" 00:22:45.850 20:50:40 nvmf_dif -- nvmf/common.sh@169 -- # true 00:22:45.850 20:50:40 nvmf_dif -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:22:45.850 Cannot find device "nvmf_br" 00:22:45.850 20:50:40 nvmf_dif -- nvmf/common.sh@170 -- # true 00:22:45.850 20:50:40 nvmf_dif -- nvmf/common.sh@171 -- # 
ip link delete nvmf_init_if 00:22:45.850 Cannot find device "nvmf_init_if" 00:22:45.850 20:50:40 nvmf_dif -- nvmf/common.sh@171 -- # true 00:22:45.850 20:50:40 nvmf_dif -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:22:45.850 Cannot find device "nvmf_init_if2" 00:22:45.850 20:50:40 nvmf_dif -- nvmf/common.sh@172 -- # true 00:22:45.850 20:50:40 nvmf_dif -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:45.850 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:45.850 20:50:40 nvmf_dif -- nvmf/common.sh@173 -- # true 00:22:45.850 20:50:40 nvmf_dif -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:45.850 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:45.850 20:50:40 nvmf_dif -- nvmf/common.sh@174 -- # true 00:22:45.850 20:50:40 nvmf_dif -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:22:45.850 20:50:40 nvmf_dif -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:45.850 20:50:40 nvmf_dif -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:22:45.850 20:50:40 nvmf_dif -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:45.850 20:50:40 nvmf_dif -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:46.118 20:50:40 nvmf_dif -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:46.118 20:50:40 nvmf_dif -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:46.118 20:50:40 nvmf_dif -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:46.118 20:50:40 nvmf_dif -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:22:46.118 20:50:40 nvmf_dif -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:22:46.118 20:50:40 nvmf_dif -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:22:46.118 20:50:40 nvmf_dif -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:22:46.118 20:50:40 nvmf_dif -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:22:46.118 20:50:40 nvmf_dif -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:22:46.118 20:50:40 nvmf_dif -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:22:46.118 20:50:40 nvmf_dif -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:22:46.118 20:50:40 nvmf_dif -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:22:46.118 20:50:40 nvmf_dif -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:46.118 20:50:40 nvmf_dif -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:46.118 20:50:40 nvmf_dif -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:46.118 20:50:40 nvmf_dif -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:22:46.118 20:50:40 nvmf_dif -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:22:46.118 20:50:40 nvmf_dif -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:22:46.118 20:50:40 nvmf_dif -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:22:46.118 20:50:41 nvmf_dif -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:46.118 20:50:41 nvmf_dif -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:46.118 20:50:41 nvmf_dif -- 
nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:22:46.118 20:50:41 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
00:22:46.118 20:50:41 nvmf_dif -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
00:22:46.118 20:50:41 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT'
00:22:46.118 20:50:41 nvmf_dif -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:22:46.118 20:50:41 nvmf_dif -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
00:22:46.118 20:50:41 nvmf_dif -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3
00:22:46.118 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:22:46.118 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.117 ms
00:22:46.118 
00:22:46.118 --- 10.0.0.3 ping statistics ---
00:22:46.118 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:22:46.118 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms
00:22:46.118 20:50:41 nvmf_dif -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4
00:22:46.118 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
00:22:46.118 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.063 ms
00:22:46.118 
00:22:46.118 --- 10.0.0.4 ping statistics ---
00:22:46.118 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:22:46.118 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms
00:22:46.118 20:50:41 nvmf_dif -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:22:46.118 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:22:46.118 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms
00:22:46.118 
00:22:46.118 --- 10.0.0.1 ping statistics ---
00:22:46.118 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:22:46.118 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms
00:22:46.118 20:50:41 nvmf_dif -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2
00:22:46.118 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:22:46.118 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.050 ms 00:22:46.118 00:22:46.118 --- 10.0.0.2 ping statistics --- 00:22:46.118 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:46.119 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:22:46.119 20:50:41 nvmf_dif -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:46.119 20:50:41 nvmf_dif -- nvmf/common.sh@461 -- # return 0 00:22:46.119 20:50:41 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:22:46.119 20:50:41 nvmf_dif -- nvmf/common.sh@479 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:22:46.698 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:46.698 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:22:46.698 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:22:46.698 20:50:41 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:46.698 20:50:41 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:46.698 20:50:41 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:46.698 20:50:41 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:46.698 20:50:41 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:46.698 20:50:41 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:46.698 20:50:41 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:22:46.698 20:50:41 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:22:46.698 20:50:41 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:46.698 20:50:41 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:46.698 20:50:41 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:22:46.698 20:50:41 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=83527 00:22:46.698 20:50:41 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 83527 00:22:46.698 20:50:41 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:22:46.698 20:50:41 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 83527 ']' 00:22:46.698 20:50:41 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:46.698 20:50:41 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:46.698 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:46.698 20:50:41 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:46.698 20:50:41 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:46.698 20:50:41 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:22:46.698 [2024-11-26 20:50:41.645502] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:22:46.698 [2024-11-26 20:50:41.645603] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:46.957 [2024-11-26 20:50:41.806753] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:46.957 [2024-11-26 20:50:41.891703] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
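Before the target app comes up, nvmf_veth_init has built the virtual test network that the pings above verify. The key commands from the trace, collected in one place without the xtrace prefixes (link-up steps omitted):

  ip netns add nvmf_tgt_ns_spdk
  # two initiator-side and two target-side veth pairs; the *_if ends carry addresses, the *_br ends join the bridge
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  # initiator addresses 10.0.0.1/.2 stay on the host, target addresses 10.0.0.3/.4 live in the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip addr add 10.0.0.2/24 dev nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
  # everything is switched through the nvmf_br bridge; TCP/4420 is then allowed with SPDK_NVMF-tagged iptables rules
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_init_br2 master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br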
00:22:46.957 [2024-11-26 20:50:41.891773] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:46.957 [2024-11-26 20:50:41.891789] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:46.957 [2024-11-26 20:50:41.891803] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:46.957 [2024-11-26 20:50:41.891814] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:46.957 [2024-11-26 20:50:41.892281] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:47.216 [2024-11-26 20:50:41.973428] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:47.784 20:50:42 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:47.784 20:50:42 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:22:47.784 20:50:42 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:47.784 20:50:42 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:47.784 20:50:42 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:22:47.784 20:50:42 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:47.784 20:50:42 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:22:47.784 20:50:42 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:22:47.784 20:50:42 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:47.784 20:50:42 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:22:47.784 [2024-11-26 20:50:42.732629] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:47.784 20:50:42 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:47.784 20:50:42 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:22:47.784 20:50:42 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:47.784 20:50:42 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:47.784 20:50:42 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:22:47.784 ************************************ 00:22:47.784 START TEST fio_dif_1_default 00:22:47.784 ************************************ 00:22:47.784 20:50:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:22:47.784 20:50:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:22:47.784 20:50:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:22:47.784 20:50:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:22:47.784 20:50:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:22:47.784 20:50:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:22:47.784 20:50:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:22:47.784 20:50:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:47.784 20:50:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:22:47.784 bdev_null0 00:22:47.784 20:50:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:47.784 20:50:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:22:47.784 
20:50:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:47.784 20:50:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:22:47.784 20:50:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:47.784 20:50:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:22:47.784 20:50:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:47.784 20:50:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:22:47.784 20:50:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:47.784 20:50:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:22:47.784 20:50:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:47.784 20:50:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:22:48.044 [2024-11-26 20:50:42.776762] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:22:48.044 20:50:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.044 20:50:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:22:48.044 20:50:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:22:48.044 20:50:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:22:48.044 20:50:42 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:22:48.044 20:50:42 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:22:48.044 20:50:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:48.044 20:50:42 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:48.044 20:50:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:48.044 20:50:42 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:48.044 { 00:22:48.044 "params": { 00:22:48.044 "name": "Nvme$subsystem", 00:22:48.044 "trtype": "$TEST_TRANSPORT", 00:22:48.044 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:48.044 "adrfam": "ipv4", 00:22:48.044 "trsvcid": "$NVMF_PORT", 00:22:48.044 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:48.044 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:48.044 "hdgst": ${hdgst:-false}, 00:22:48.044 "ddgst": ${ddgst:-false} 00:22:48.044 }, 00:22:48.044 "method": "bdev_nvme_attach_controller" 00:22:48.044 } 00:22:48.044 EOF 00:22:48.044 )") 00:22:48.044 20:50:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:22:48.044 20:50:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:22:48.044 20:50:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:48.044 20:50:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:22:48.044 20:50:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:22:48.044 20:50:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:22:48.044 20:50:42 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:48.044 20:50:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:22:48.044 20:50:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:22:48.044 20:50:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:22:48.044 20:50:42 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:22:48.044 20:50:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:48.044 20:50:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:22:48.044 20:50:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:22:48.044 20:50:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:22:48.044 20:50:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:22:48.044 20:50:42 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 00:22:48.044 20:50:42 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:22:48.044 20:50:42 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:48.044 "params": { 00:22:48.044 "name": "Nvme0", 00:22:48.044 "trtype": "tcp", 00:22:48.044 "traddr": "10.0.0.3", 00:22:48.044 "adrfam": "ipv4", 00:22:48.044 "trsvcid": "4420", 00:22:48.044 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:48.044 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:22:48.044 "hdgst": false, 00:22:48.044 "ddgst": false 00:22:48.045 }, 00:22:48.045 "method": "bdev_nvme_attach_controller" 00:22:48.045 }' 00:22:48.045 20:50:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:22:48.045 20:50:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:22:48.045 20:50:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:22:48.045 20:50:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:48.045 20:50:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:22:48.045 20:50:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:22:48.045 20:50:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:22:48.045 20:50:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:22:48.045 20:50:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:22:48.045 20:50:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:48.045 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:22:48.045 fio-3.35 00:22:48.045 Starting 1 thread 00:23:00.248 00:23:00.248 filename0: (groupid=0, jobs=1): err= 0: pid=83595: Tue Nov 26 20:50:53 2024 00:23:00.248 read: IOPS=11.7k, BW=45.7MiB/s (47.9MB/s)(457MiB/10001msec) 00:23:00.248 slat (nsec): min=5768, max=89010, avg=6264.48, stdev=1148.24 00:23:00.248 clat (usec): min=297, max=3908, avg=324.94, stdev=27.28 00:23:00.248 lat (usec): min=303, max=3943, avg=331.20, stdev=27.56 00:23:00.248 clat percentiles (usec): 00:23:00.248 | 1.00th=[ 306], 5.00th=[ 
306], 10.00th=[ 310], 20.00th=[ 314], 00:23:00.248 | 30.00th=[ 318], 40.00th=[ 318], 50.00th=[ 322], 60.00th=[ 326], 00:23:00.248 | 70.00th=[ 330], 80.00th=[ 334], 90.00th=[ 343], 95.00th=[ 351], 00:23:00.248 | 99.00th=[ 379], 99.50th=[ 388], 99.90th=[ 412], 99.95th=[ 437], 00:23:00.248 | 99.99th=[ 545] 00:23:00.248 bw ( KiB/s): min=44064, max=47264, per=100.00%, avg=46816.00, stdev=690.37, samples=19 00:23:00.248 iops : min=11016, max=11816, avg=11704.00, stdev=172.59, samples=19 00:23:00.248 lat (usec) : 500=99.97%, 750=0.02% 00:23:00.248 lat (msec) : 2=0.01%, 4=0.01% 00:23:00.248 cpu : usr=81.75%, sys=16.88%, ctx=136, majf=0, minf=9 00:23:00.248 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:00.248 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:00.248 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:00.248 issued rwts: total=116920,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:00.248 latency : target=0, window=0, percentile=100.00%, depth=4 00:23:00.248 00:23:00.248 Run status group 0 (all jobs): 00:23:00.248 READ: bw=45.7MiB/s (47.9MB/s), 45.7MiB/s-45.7MiB/s (47.9MB/s-47.9MB/s), io=457MiB (479MB), run=10001-10001msec 00:23:00.248 20:50:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:23:00.248 20:50:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:23:00.248 20:50:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:23:00.248 20:50:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:23:00.248 20:50:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:23:00.248 20:50:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:23:00.248 20:50:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:00.248 20:50:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:23:00.248 20:50:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:00.248 20:50:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:23:00.248 20:50:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:00.248 20:50:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:23:00.248 20:50:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:00.248 00:23:00.248 real 0m11.078s 00:23:00.248 user 0m8.862s 00:23:00.248 sys 0m2.021s 00:23:00.248 20:50:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:00.248 20:50:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:23:00.248 ************************************ 00:23:00.248 END TEST fio_dif_1_default 00:23:00.248 ************************************ 00:23:00.248 20:50:53 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:23:00.248 20:50:53 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:00.248 20:50:53 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:00.248 20:50:53 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:23:00.248 ************************************ 00:23:00.248 START TEST fio_dif_1_multi_subsystems 00:23:00.248 ************************************ 00:23:00.248 20:50:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1129 
-- # fio_dif_1_multi_subsystems 00:23:00.248 20:50:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:23:00.248 20:50:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:23:00.248 20:50:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:23:00.248 20:50:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:23:00.248 20:50:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:23:00.248 20:50:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:23:00.248 20:50:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:23:00.248 20:50:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:00.248 20:50:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:00.248 bdev_null0 00:23:00.248 20:50:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:00.248 20:50:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:23:00.248 20:50:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:00.248 20:50:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:00.248 20:50:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:00.248 20:50:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:23:00.248 20:50:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:00.248 20:50:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:00.248 20:50:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:00.248 20:50:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:23:00.248 20:50:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:00.248 20:50:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:00.248 [2024-11-26 20:50:53.917828] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:00.248 20:50:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:00.248 20:50:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:23:00.248 20:50:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:23:00.248 20:50:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:23:00.248 20:50:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:23:00.248 20:50:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:00.248 20:50:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:00.248 bdev_null1 00:23:00.248 20:50:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:00.248 20:50:53 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:23:00.248 20:50:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:00.248 20:50:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:00.248 20:50:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:00.248 20:50:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:23:00.248 20:50:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:00.248 20:50:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:00.248 20:50:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:00.248 20:50:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:23:00.248 20:50:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:00.248 20:50:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:00.248 20:50:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:00.248 20:50:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:23:00.249 20:50:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:23:00.249 20:50:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:23:00.249 20:50:53 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:23:00.249 20:50:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:00.249 20:50:53 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:23:00.249 20:50:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:00.249 20:50:53 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:00.249 20:50:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:23:00.249 20:50:53 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:00.249 { 00:23:00.249 "params": { 00:23:00.249 "name": "Nvme$subsystem", 00:23:00.249 "trtype": "$TEST_TRANSPORT", 00:23:00.249 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:00.249 "adrfam": "ipv4", 00:23:00.249 "trsvcid": "$NVMF_PORT", 00:23:00.249 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:00.249 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:00.249 "hdgst": ${hdgst:-false}, 00:23:00.249 "ddgst": ${ddgst:-false} 00:23:00.249 }, 00:23:00.249 "method": "bdev_nvme_attach_controller" 00:23:00.249 } 00:23:00.249 EOF 00:23:00.249 )") 00:23:00.249 20:50:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:23:00.249 20:50:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:00.249 20:50:53 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1343 -- # local sanitizers 00:23:00.249 20:50:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:23:00.249 20:50:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:00.249 20:50:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:23:00.249 20:50:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:23:00.249 20:50:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:23:00.249 20:50:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:23:00.249 20:50:53 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:23:00.249 20:50:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:23:00.249 20:50:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:00.249 20:50:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:23:00.249 20:50:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:23:00.249 20:50:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:23:00.249 20:50:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:23:00.249 20:50:53 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:00.249 20:50:53 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:00.249 { 00:23:00.249 "params": { 00:23:00.249 "name": "Nvme$subsystem", 00:23:00.249 "trtype": "$TEST_TRANSPORT", 00:23:00.249 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:00.249 "adrfam": "ipv4", 00:23:00.249 "trsvcid": "$NVMF_PORT", 00:23:00.249 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:00.249 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:00.249 "hdgst": ${hdgst:-false}, 00:23:00.249 "ddgst": ${ddgst:-false} 00:23:00.249 }, 00:23:00.249 "method": "bdev_nvme_attach_controller" 00:23:00.249 } 00:23:00.249 EOF 00:23:00.249 )") 00:23:00.249 20:50:53 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:23:00.249 20:50:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:23:00.249 20:50:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:23:00.249 20:50:53 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 
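Stripped of the xtrace wrappers, the storage-side setup for this two-subsystem run is a short RPC sequence against the target running in the namespace; rpc_cmd is the autotest wrapper around scripts/rpc.py, and cnode1/bdev_null1 are configured the same way with serial 53313233-1:

  scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420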
00:23:00.249 20:50:53 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:23:00.249 20:50:53 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:23:00.249 "params": { 00:23:00.249 "name": "Nvme0", 00:23:00.249 "trtype": "tcp", 00:23:00.249 "traddr": "10.0.0.3", 00:23:00.249 "adrfam": "ipv4", 00:23:00.249 "trsvcid": "4420", 00:23:00.249 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:00.249 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:00.249 "hdgst": false, 00:23:00.249 "ddgst": false 00:23:00.249 }, 00:23:00.249 "method": "bdev_nvme_attach_controller" 00:23:00.249 },{ 00:23:00.249 "params": { 00:23:00.249 "name": "Nvme1", 00:23:00.249 "trtype": "tcp", 00:23:00.249 "traddr": "10.0.0.3", 00:23:00.249 "adrfam": "ipv4", 00:23:00.249 "trsvcid": "4420", 00:23:00.249 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:00.249 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:00.249 "hdgst": false, 00:23:00.249 "ddgst": false 00:23:00.249 }, 00:23:00.249 "method": "bdev_nvme_attach_controller" 00:23:00.249 }' 00:23:00.249 20:50:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:23:00.249 20:50:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:23:00.249 20:50:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:23:00.249 20:50:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:00.249 20:50:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:23:00.249 20:50:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:23:00.249 20:50:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:23:00.249 20:50:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:23:00.249 20:50:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:23:00.249 20:50:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:00.249 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:23:00.249 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:23:00.249 fio-3.35 00:23:00.249 Starting 2 threads 00:23:10.214 00:23:10.214 filename0: (groupid=0, jobs=1): err= 0: pid=83759: Tue Nov 26 20:51:04 2024 00:23:10.214 read: IOPS=6166, BW=24.1MiB/s (25.3MB/s)(241MiB/10001msec) 00:23:10.214 slat (nsec): min=5912, max=39882, avg=10936.49, stdev=2813.26 00:23:10.214 clat (usec): min=324, max=1255, avg=619.94, stdev=33.47 00:23:10.214 lat (usec): min=330, max=1294, avg=630.87, stdev=34.47 00:23:10.214 clat percentiles (usec): 00:23:10.214 | 1.00th=[ 545], 5.00th=[ 562], 10.00th=[ 578], 20.00th=[ 594], 00:23:10.214 | 30.00th=[ 603], 40.00th=[ 611], 50.00th=[ 619], 60.00th=[ 627], 00:23:10.214 | 70.00th=[ 635], 80.00th=[ 644], 90.00th=[ 660], 95.00th=[ 676], 00:23:10.214 | 99.00th=[ 717], 99.50th=[ 734], 99.90th=[ 766], 99.95th=[ 783], 00:23:10.214 | 99.99th=[ 816] 00:23:10.214 bw ( KiB/s): min=23664, max=24928, per=50.04%, avg=24682.95, stdev=284.82, samples=19 00:23:10.214 iops : min= 5916, max= 
6232, avg=6170.74, stdev=71.21, samples=19 00:23:10.214 lat (usec) : 500=0.01%, 750=99.78%, 1000=0.20% 00:23:10.214 lat (msec) : 2=0.01% 00:23:10.214 cpu : usr=87.87%, sys=11.16%, ctx=24, majf=0, minf=0 00:23:10.214 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:10.214 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:10.214 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:10.214 issued rwts: total=61668,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:10.214 latency : target=0, window=0, percentile=100.00%, depth=4 00:23:10.214 filename1: (groupid=0, jobs=1): err= 0: pid=83760: Tue Nov 26 20:51:04 2024 00:23:10.214 read: IOPS=6166, BW=24.1MiB/s (25.3MB/s)(241MiB/10000msec) 00:23:10.214 slat (nsec): min=5915, max=42267, avg=10736.23, stdev=2793.30 00:23:10.214 clat (usec): min=364, max=1337, avg=620.31, stdev=28.72 00:23:10.214 lat (usec): min=370, max=1371, avg=631.04, stdev=29.15 00:23:10.214 clat percentiles (usec): 00:23:10.214 | 1.00th=[ 570], 5.00th=[ 586], 10.00th=[ 594], 20.00th=[ 603], 00:23:10.214 | 30.00th=[ 603], 40.00th=[ 611], 50.00th=[ 619], 60.00th=[ 619], 00:23:10.214 | 70.00th=[ 627], 80.00th=[ 635], 90.00th=[ 652], 95.00th=[ 676], 00:23:10.214 | 99.00th=[ 709], 99.50th=[ 725], 99.90th=[ 766], 99.95th=[ 791], 00:23:10.214 | 99.99th=[ 1090] 00:23:10.214 bw ( KiB/s): min=23664, max=24928, per=50.04%, avg=24681.26, stdev=284.59, samples=19 00:23:10.214 iops : min= 5916, max= 6232, avg=6170.32, stdev=71.15, samples=19 00:23:10.214 lat (usec) : 500=0.03%, 750=99.77%, 1000=0.19% 00:23:10.214 lat (msec) : 2=0.01% 00:23:10.214 cpu : usr=87.88%, sys=11.13%, ctx=10, majf=0, minf=0 00:23:10.214 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:10.214 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:10.214 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:10.214 issued rwts: total=61660,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:10.214 latency : target=0, window=0, percentile=100.00%, depth=4 00:23:10.214 00:23:10.214 Run status group 0 (all jobs): 00:23:10.214 READ: bw=48.2MiB/s (50.5MB/s), 24.1MiB/s-24.1MiB/s (25.3MB/s-25.3MB/s), io=482MiB (505MB), run=10000-10001msec 00:23:10.214 20:51:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:23:10.214 20:51:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:23:10.214 20:51:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:23:10.214 20:51:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:23:10.214 20:51:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:23:10.214 20:51:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:23:10.214 20:51:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:10.214 20:51:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:10.214 20:51:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:10.214 20:51:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:23:10.214 20:51:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:10.214 20:51:05 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@10 -- # set +x 00:23:10.214 20:51:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:10.214 20:51:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:23:10.214 20:51:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:23:10.214 20:51:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:23:10.214 20:51:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:10.214 20:51:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:10.214 20:51:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:10.214 20:51:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:10.214 20:51:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:23:10.214 20:51:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:10.214 20:51:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:10.214 ************************************ 00:23:10.214 END TEST fio_dif_1_multi_subsystems 00:23:10.214 ************************************ 00:23:10.214 20:51:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:10.214 00:23:10.214 real 0m11.208s 00:23:10.214 user 0m18.358s 00:23:10.214 sys 0m2.571s 00:23:10.214 20:51:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:10.214 20:51:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:10.214 20:51:05 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:23:10.214 20:51:05 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:10.214 20:51:05 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:10.214 20:51:05 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:23:10.214 ************************************ 00:23:10.214 START TEST fio_dif_rand_params 00:23:10.214 ************************************ 00:23:10.214 20:51:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:23:10.214 20:51:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:23:10.214 20:51:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:23:10.214 20:51:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:23:10.214 20:51:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:23:10.214 20:51:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:23:10.214 20:51:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:23:10.214 20:51:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:23:10.214 20:51:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:23:10.214 20:51:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:23:10.214 20:51:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:23:10.214 20:51:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:23:10.214 20:51:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:23:10.214 20:51:05 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:23:10.214 20:51:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:10.214 20:51:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:10.214 bdev_null0 00:23:10.214 20:51:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:10.214 20:51:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:23:10.214 20:51:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:10.214 20:51:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:10.214 20:51:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:10.214 20:51:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:23:10.214 20:51:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:10.214 20:51:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:10.214 20:51:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:10.215 20:51:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:23:10.215 20:51:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:10.215 20:51:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:10.215 [2024-11-26 20:51:05.198747] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:10.215 20:51:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:10.215 20:51:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:23:10.474 20:51:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:23:10.474 20:51:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:23:10.474 20:51:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:23:10.474 20:51:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:10.474 20:51:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:23:10.474 20:51:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:10.474 20:51:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:10.474 20:51:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:23:10.474 20:51:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:23:10.474 20:51:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:10.474 { 00:23:10.474 "params": { 00:23:10.474 "name": "Nvme$subsystem", 00:23:10.474 "trtype": "$TEST_TRANSPORT", 00:23:10.474 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:10.474 "adrfam": "ipv4", 00:23:10.474 "trsvcid": "$NVMF_PORT", 00:23:10.474 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:10.474 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:23:10.474 "hdgst": ${hdgst:-false}, 00:23:10.474 "ddgst": ${ddgst:-false} 00:23:10.474 }, 00:23:10.474 "method": "bdev_nvme_attach_controller" 00:23:10.474 } 00:23:10.474 EOF 00:23:10.474 )") 00:23:10.474 20:51:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:10.474 20:51:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:23:10.474 20:51:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:23:10.474 20:51:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:23:10.474 20:51:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:10.474 20:51:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:23:10.474 20:51:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:23:10.474 20:51:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:23:10.474 20:51:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:23:10.474 20:51:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:10.474 20:51:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:23:10.474 20:51:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:23:10.474 20:51:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:23:10.474 20:51:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:23:10.474 20:51:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:23:10.474 20:51:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:23:10.474 20:51:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:23:10.474 "params": { 00:23:10.474 "name": "Nvme0", 00:23:10.474 "trtype": "tcp", 00:23:10.474 "traddr": "10.0.0.3", 00:23:10.474 "adrfam": "ipv4", 00:23:10.474 "trsvcid": "4420", 00:23:10.474 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:10.474 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:10.474 "hdgst": false, 00:23:10.474 "ddgst": false 00:23:10.474 }, 00:23:10.474 "method": "bdev_nvme_attach_controller" 00:23:10.474 }' 00:23:10.474 20:51:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:23:10.474 20:51:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:23:10.474 20:51:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:23:10.474 20:51:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:10.474 20:51:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:23:10.474 20:51:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:23:10.474 20:51:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:23:10.474 20:51:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:23:10.474 20:51:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:23:10.474 20:51:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:10.474 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:23:10.474 ... 
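On the host side, fio runs through SPDK's spdk_bdev ioengine: the JSON printed above (passed in over /dev/fd/62) attaches the remote namespace as a local bdev, while gen_fio_conf supplies the job file over /dev/fd/61. For this first case (bs=128k, numjobs=3, iodepth=3, runtime=5, one file) the generated job amounts to roughly the sketch below; the exact template, the temporary path, and the Nvme0n1 filename are assumptions inferred from the attached controller name, not copied from the log.

# Sketch: approximate job file behind the 3-thread run that follows
# (the real file is produced by gen_fio_conf and never echoed into the log).
cat <<FIO > /tmp/fio_dif_rand_params.job   # hypothetical path, for illustration
[global]
ioengine=spdk_bdev
thread=1          # the SPDK fio plugin runs jobs as threads
rw=randread
bs=128k
iodepth=3
numjobs=3
time_based=1      # inferred from the ~5005-5007 ms per-job runtimes
runtime=5

[filename0]
filename=Nvme0n1  # assumed bdev name from the "name": "Nvme0" attach above
FIO
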
00:23:10.474 fio-3.35 00:23:10.474 Starting 3 threads 00:23:17.042 00:23:17.042 filename0: (groupid=0, jobs=1): err= 0: pid=83918: Tue Nov 26 20:51:11 2024 00:23:17.042 read: IOPS=309, BW=38.6MiB/s (40.5MB/s)(194MiB/5007msec) 00:23:17.042 slat (nsec): min=6293, max=31837, avg=13370.67, stdev=2291.77 00:23:17.042 clat (usec): min=7061, max=12855, avg=9674.23, stdev=331.77 00:23:17.042 lat (usec): min=7075, max=12878, avg=9687.60, stdev=331.91 00:23:17.042 clat percentiles (usec): 00:23:17.042 | 1.00th=[ 9503], 5.00th=[ 9503], 10.00th=[ 9503], 20.00th=[ 9503], 00:23:17.042 | 30.00th=[ 9503], 40.00th=[ 9503], 50.00th=[ 9503], 60.00th=[ 9634], 00:23:17.042 | 70.00th=[ 9765], 80.00th=[ 9896], 90.00th=[10028], 95.00th=[10290], 00:23:17.042 | 99.00th=[10552], 99.50th=[10683], 99.90th=[12911], 99.95th=[12911], 00:23:17.042 | 99.99th=[12911] 00:23:17.042 bw ( KiB/s): min=39089, max=39936, per=33.25%, avg=39500.56, stdev=413.85, samples=9 00:23:17.042 iops : min= 305, max= 312, avg=308.56, stdev= 3.28, samples=9 00:23:17.042 lat (msec) : 10=87.86%, 20=12.14% 00:23:17.042 cpu : usr=89.07%, sys=10.51%, ctx=6, majf=0, minf=0 00:23:17.042 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:17.042 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:17.042 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:17.042 issued rwts: total=1548,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:17.042 latency : target=0, window=0, percentile=100.00%, depth=3 00:23:17.042 filename0: (groupid=0, jobs=1): err= 0: pid=83919: Tue Nov 26 20:51:11 2024 00:23:17.042 read: IOPS=309, BW=38.7MiB/s (40.6MB/s)(194MiB/5005msec) 00:23:17.042 slat (nsec): min=5991, max=31406, avg=9459.40, stdev=3825.09 00:23:17.042 clat (usec): min=3359, max=11314, avg=9656.84, stdev=470.34 00:23:17.042 lat (usec): min=3372, max=11326, avg=9666.29, stdev=470.41 00:23:17.042 clat percentiles (usec): 00:23:17.042 | 1.00th=[ 9503], 5.00th=[ 9503], 10.00th=[ 9503], 20.00th=[ 9503], 00:23:17.042 | 30.00th=[ 9503], 40.00th=[ 9503], 50.00th=[ 9503], 60.00th=[ 9634], 00:23:17.042 | 70.00th=[ 9634], 80.00th=[ 9896], 90.00th=[10159], 95.00th=[10290], 00:23:17.042 | 99.00th=[10421], 99.50th=[10552], 99.90th=[11338], 99.95th=[11338], 00:23:17.042 | 99.99th=[11338] 00:23:17.042 bw ( KiB/s): min=39168, max=39936, per=33.39%, avg=39671.11, stdev=378.22, samples=9 00:23:17.042 iops : min= 306, max= 312, avg=309.89, stdev= 2.93, samples=9 00:23:17.042 lat (msec) : 4=0.39%, 10=86.07%, 20=13.54% 00:23:17.042 cpu : usr=88.47%, sys=11.07%, ctx=13, majf=0, minf=0 00:23:17.042 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:17.042 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:17.042 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:17.042 issued rwts: total=1551,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:17.042 latency : target=0, window=0, percentile=100.00%, depth=3 00:23:17.043 filename0: (groupid=0, jobs=1): err= 0: pid=83920: Tue Nov 26 20:51:11 2024 00:23:17.043 read: IOPS=309, BW=38.6MiB/s (40.5MB/s)(194MiB/5007msec) 00:23:17.043 slat (nsec): min=6147, max=30142, avg=13674.68, stdev=2329.45 00:23:17.043 clat (usec): min=7065, max=12899, avg=9672.99, stdev=331.74 00:23:17.043 lat (usec): min=7078, max=12923, avg=9686.67, stdev=331.98 00:23:17.043 clat percentiles (usec): 00:23:17.043 | 1.00th=[ 9503], 5.00th=[ 9503], 10.00th=[ 9503], 20.00th=[ 9503], 00:23:17.043 | 30.00th=[ 9503], 40.00th=[ 9503], 
50.00th=[ 9503], 60.00th=[ 9634], 00:23:17.043 | 70.00th=[ 9765], 80.00th=[ 9896], 90.00th=[10028], 95.00th=[10290], 00:23:17.043 | 99.00th=[10552], 99.50th=[10683], 99.90th=[12911], 99.95th=[12911], 00:23:17.043 | 99.99th=[12911] 00:23:17.043 bw ( KiB/s): min=39089, max=39936, per=33.25%, avg=39500.56, stdev=413.85, samples=9 00:23:17.043 iops : min= 305, max= 312, avg=308.56, stdev= 3.28, samples=9 00:23:17.043 lat (msec) : 10=87.86%, 20=12.14% 00:23:17.043 cpu : usr=89.75%, sys=9.79%, ctx=9, majf=0, minf=0 00:23:17.043 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:17.043 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:17.043 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:17.043 issued rwts: total=1548,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:17.043 latency : target=0, window=0, percentile=100.00%, depth=3 00:23:17.043 00:23:17.043 Run status group 0 (all jobs): 00:23:17.043 READ: bw=116MiB/s (122MB/s), 38.6MiB/s-38.7MiB/s (40.5MB/s-40.6MB/s), io=581MiB (609MB), run=5005-5007msec 00:23:17.043 20:51:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:23:17.043 20:51:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:23:17.043 20:51:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:23:17.043 20:51:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:23:17.043 20:51:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:23:17.043 20:51:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:23:17.043 20:51:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:17.043 20:51:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:17.043 20:51:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:17.043 20:51:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:23:17.043 20:51:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:17.043 20:51:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:17.043 20:51:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:17.043 20:51:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:23:17.043 20:51:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:23:17.043 20:51:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:23:17.043 20:51:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:23:17.043 20:51:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:23:17.043 20:51:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:23:17.043 20:51:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:23:17.043 20:51:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:23:17.043 20:51:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:23:17.043 20:51:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:23:17.043 20:51:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:23:17.043 20:51:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:23:17.043 20:51:11 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:17.043 20:51:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:17.043 bdev_null0 00:23:17.043 20:51:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:17.043 20:51:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:23:17.043 20:51:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:17.043 20:51:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:17.043 20:51:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:17.043 20:51:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:23:17.043 20:51:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:17.043 20:51:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:17.043 20:51:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:17.043 20:51:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:23:17.043 20:51:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:17.043 20:51:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:17.043 [2024-11-26 20:51:11.321313] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:17.043 20:51:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:17.043 20:51:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:23:17.043 20:51:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:23:17.043 20:51:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:23:17.043 20:51:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:23:17.043 20:51:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:17.043 20:51:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:17.043 bdev_null1 00:23:17.043 20:51:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:17.043 20:51:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:23:17.043 20:51:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:17.043 20:51:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:17.043 20:51:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:17.043 20:51:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:23:17.043 20:51:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:17.043 20:51:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:17.043 20:51:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:17.043 20:51:11 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:23:17.043 20:51:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:17.043 20:51:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:17.043 20:51:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:17.043 20:51:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:23:17.043 20:51:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:23:17.043 20:51:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:23:17.043 20:51:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:23:17.043 20:51:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:17.043 20:51:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:17.043 bdev_null2 00:23:17.043 20:51:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:17.043 20:51:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:23:17.043 20:51:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:17.043 20:51:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:17.043 20:51:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:17.043 20:51:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:23:17.043 20:51:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:17.043 20:51:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:17.043 20:51:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:17.043 20:51:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:23:17.043 20:51:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:17.043 20:51:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:17.043 20:51:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:17.043 20:51:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:23:17.043 20:51:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:23:17.043 20:51:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:23:17.043 20:51:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:17.043 20:51:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:23:17.043 20:51:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:17.043 20:51:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:23:17.043 20:51:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:17.043 20:51:11 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@82 -- # gen_fio_conf 00:23:17.043 20:51:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:17.043 { 00:23:17.043 "params": { 00:23:17.043 "name": "Nvme$subsystem", 00:23:17.043 "trtype": "$TEST_TRANSPORT", 00:23:17.043 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:17.043 "adrfam": "ipv4", 00:23:17.043 "trsvcid": "$NVMF_PORT", 00:23:17.043 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:17.043 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:17.043 "hdgst": ${hdgst:-false}, 00:23:17.043 "ddgst": ${ddgst:-false} 00:23:17.043 }, 00:23:17.043 "method": "bdev_nvme_attach_controller" 00:23:17.043 } 00:23:17.043 EOF 00:23:17.043 )") 00:23:17.043 20:51:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:23:17.043 20:51:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:23:17.043 20:51:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:23:17.044 20:51:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:17.044 20:51:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:23:17.044 20:51:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:17.044 20:51:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:23:17.044 20:51:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:23:17.044 20:51:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:23:17.044 20:51:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:23:17.044 20:51:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:23:17.044 20:51:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:23:17.044 20:51:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:23:17.044 20:51:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:17.044 20:51:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:17.044 20:51:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:23:17.044 20:51:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:17.044 { 00:23:17.044 "params": { 00:23:17.044 "name": "Nvme$subsystem", 00:23:17.044 "trtype": "$TEST_TRANSPORT", 00:23:17.044 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:17.044 "adrfam": "ipv4", 00:23:17.044 "trsvcid": "$NVMF_PORT", 00:23:17.044 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:17.044 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:17.044 "hdgst": ${hdgst:-false}, 00:23:17.044 "ddgst": ${ddgst:-false} 00:23:17.044 }, 00:23:17.044 "method": "bdev_nvme_attach_controller" 00:23:17.044 } 00:23:17.044 EOF 00:23:17.044 )") 00:23:17.044 20:51:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:23:17.044 20:51:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:23:17.044 20:51:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:23:17.044 20:51:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:23:17.044 20:51:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:23:17.044 20:51:11 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:23:17.044 20:51:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:23:17.044 20:51:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:17.044 20:51:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:17.044 { 00:23:17.044 "params": { 00:23:17.044 "name": "Nvme$subsystem", 00:23:17.044 "trtype": "$TEST_TRANSPORT", 00:23:17.044 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:17.044 "adrfam": "ipv4", 00:23:17.044 "trsvcid": "$NVMF_PORT", 00:23:17.044 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:17.044 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:17.044 "hdgst": ${hdgst:-false}, 00:23:17.044 "ddgst": ${ddgst:-false} 00:23:17.044 }, 00:23:17.044 "method": "bdev_nvme_attach_controller" 00:23:17.044 } 00:23:17.044 EOF 00:23:17.044 )") 00:23:17.044 20:51:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:23:17.044 20:51:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:23:17.044 20:51:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:23:17.044 20:51:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:23:17.044 "params": { 00:23:17.044 "name": "Nvme0", 00:23:17.044 "trtype": "tcp", 00:23:17.044 "traddr": "10.0.0.3", 00:23:17.044 "adrfam": "ipv4", 00:23:17.044 "trsvcid": "4420", 00:23:17.044 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:17.044 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:17.044 "hdgst": false, 00:23:17.044 "ddgst": false 00:23:17.044 }, 00:23:17.044 "method": "bdev_nvme_attach_controller" 00:23:17.044 },{ 00:23:17.044 "params": { 00:23:17.044 "name": "Nvme1", 00:23:17.044 "trtype": "tcp", 00:23:17.044 "traddr": "10.0.0.3", 00:23:17.044 "adrfam": "ipv4", 00:23:17.044 "trsvcid": "4420", 00:23:17.044 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:17.044 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:17.044 "hdgst": false, 00:23:17.044 "ddgst": false 00:23:17.044 }, 00:23:17.044 "method": "bdev_nvme_attach_controller" 00:23:17.044 },{ 00:23:17.044 "params": { 00:23:17.044 "name": "Nvme2", 00:23:17.044 "trtype": "tcp", 00:23:17.044 "traddr": "10.0.0.3", 00:23:17.044 "adrfam": "ipv4", 00:23:17.044 "trsvcid": "4420", 00:23:17.044 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:17.044 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:17.044 "hdgst": false, 00:23:17.044 "ddgst": false 00:23:17.044 }, 00:23:17.044 "method": "bdev_nvme_attach_controller" 00:23:17.044 }' 00:23:17.044 20:51:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:23:17.044 20:51:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:23:17.044 20:51:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:23:17.044 20:51:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:23:17.044 20:51:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:17.044 20:51:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:23:17.044 20:51:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:23:17.044 20:51:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:23:17.044 20:51:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # 
LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:23:17.044 20:51:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:17.044 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:23:17.044 ... 00:23:17.044 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:23:17.044 ... 00:23:17.044 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:23:17.044 ... 00:23:17.044 fio-3.35 00:23:17.044 Starting 24 threads 00:23:29.243 00:23:29.243 filename0: (groupid=0, jobs=1): err= 0: pid=84015: Tue Nov 26 20:51:22 2024 00:23:29.243 read: IOPS=216, BW=865KiB/s (886kB/s)(8696KiB/10051msec) 00:23:29.243 slat (usec): min=3, max=4022, avg=14.70, stdev=86.28 00:23:29.243 clat (usec): min=876, max=133876, avg=73753.58, stdev=30125.03 00:23:29.243 lat (usec): min=883, max=133895, avg=73768.28, stdev=30127.82 00:23:29.243 clat percentiles (usec): 00:23:29.243 | 1.00th=[ 1237], 5.00th=[ 1369], 10.00th=[ 3621], 20.00th=[ 56886], 00:23:29.243 | 30.00th=[ 64226], 40.00th=[ 77071], 50.00th=[ 84411], 60.00th=[ 87557], 00:23:29.243 | 70.00th=[ 92799], 80.00th=[ 95945], 90.00th=[102237], 95.00th=[107480], 00:23:29.243 | 99.00th=[113771], 99.50th=[119014], 99.90th=[131597], 99.95th=[133694], 00:23:29.243 | 99.99th=[133694] 00:23:29.243 bw ( KiB/s): min= 640, max= 2944, per=4.46%, avg=864.95, stdev=491.97, samples=20 00:23:29.243 iops : min= 160, max= 736, avg=216.20, stdev=123.00, samples=20 00:23:29.243 lat (usec) : 1000=0.09% 00:23:29.243 lat (msec) : 2=6.53%, 4=3.59%, 10=0.09%, 50=4.46%, 100=72.86% 00:23:29.243 lat (msec) : 250=12.37% 00:23:29.243 cpu : usr=45.24%, sys=3.02%, ctx=1367, majf=0, minf=0 00:23:29.243 IO depths : 1=0.5%, 2=2.3%, 4=7.3%, 8=74.6%, 16=15.3%, 32=0.0%, >=64=0.0% 00:23:29.243 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:29.243 complete : 0=0.0%, 4=89.6%, 8=8.8%, 16=1.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:29.243 issued rwts: total=2174,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:29.243 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:29.243 filename0: (groupid=0, jobs=1): err= 0: pid=84016: Tue Nov 26 20:51:22 2024 00:23:29.243 read: IOPS=204, BW=819KiB/s (838kB/s)(8208KiB/10024msec) 00:23:29.243 slat (usec): min=4, max=8015, avg=20.04, stdev=176.70 00:23:29.243 clat (msec): min=25, max=132, avg=78.01, stdev=18.34 00:23:29.243 lat (msec): min=25, max=132, avg=78.03, stdev=18.34 00:23:29.243 clat percentiles (msec): 00:23:29.243 | 1.00th=[ 36], 5.00th=[ 48], 10.00th=[ 56], 20.00th=[ 61], 00:23:29.243 | 30.00th=[ 64], 40.00th=[ 72], 50.00th=[ 83], 60.00th=[ 85], 00:23:29.243 | 70.00th=[ 92], 80.00th=[ 95], 90.00th=[ 101], 95.00th=[ 107], 00:23:29.243 | 99.00th=[ 111], 99.50th=[ 117], 99.90th=[ 123], 99.95th=[ 131], 00:23:29.243 | 99.99th=[ 133] 00:23:29.243 bw ( KiB/s): min= 712, max= 1032, per=4.22%, avg=817.45, stdev=62.39, samples=20 00:23:29.243 iops : min= 178, max= 258, avg=204.30, stdev=15.63, samples=20 00:23:29.243 lat (msec) : 50=6.97%, 100=83.48%, 250=9.55% 00:23:29.243 cpu : usr=32.12%, sys=2.32%, ctx=877, majf=0, minf=9 00:23:29.243 IO depths : 1=0.1%, 2=0.1%, 4=0.5%, 8=82.9%, 16=16.3%, 32=0.0%, >=64=0.0% 00:23:29.243 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:29.243 complete : 0=0.0%, 4=87.3%, 
8=12.6%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:29.243 issued rwts: total=2052,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:29.244 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:29.244 filename0: (groupid=0, jobs=1): err= 0: pid=84017: Tue Nov 26 20:51:22 2024 00:23:29.244 read: IOPS=203, BW=812KiB/s (832kB/s)(8128KiB/10008msec) 00:23:29.244 slat (usec): min=2, max=10040, avg=25.36, stdev=284.39 00:23:29.244 clat (msec): min=8, max=129, avg=78.67, stdev=18.67 00:23:29.244 lat (msec): min=8, max=129, avg=78.70, stdev=18.68 00:23:29.244 clat percentiles (msec): 00:23:29.244 | 1.00th=[ 42], 5.00th=[ 48], 10.00th=[ 56], 20.00th=[ 61], 00:23:29.244 | 30.00th=[ 66], 40.00th=[ 72], 50.00th=[ 83], 60.00th=[ 86], 00:23:29.244 | 70.00th=[ 93], 80.00th=[ 96], 90.00th=[ 102], 95.00th=[ 107], 00:23:29.244 | 99.00th=[ 121], 99.50th=[ 123], 99.90th=[ 125], 99.95th=[ 130], 00:23:29.244 | 99.99th=[ 130] 00:23:29.244 bw ( KiB/s): min= 648, max= 920, per=4.17%, avg=808.85, stdev=64.22, samples=20 00:23:29.244 iops : min= 162, max= 230, avg=202.20, stdev=16.04, samples=20 00:23:29.244 lat (msec) : 10=0.15%, 50=7.43%, 100=81.59%, 250=10.83% 00:23:29.244 cpu : usr=30.83%, sys=2.35%, ctx=872, majf=0, minf=9 00:23:29.244 IO depths : 1=0.1%, 2=0.8%, 4=3.1%, 8=80.3%, 16=15.8%, 32=0.0%, >=64=0.0% 00:23:29.244 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:29.244 complete : 0=0.0%, 4=88.0%, 8=11.3%, 16=0.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:29.244 issued rwts: total=2032,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:29.244 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:29.244 filename0: (groupid=0, jobs=1): err= 0: pid=84018: Tue Nov 26 20:51:22 2024 00:23:29.244 read: IOPS=194, BW=778KiB/s (796kB/s)(7800KiB/10029msec) 00:23:29.244 slat (usec): min=4, max=8029, avg=23.86, stdev=256.50 00:23:29.244 clat (msec): min=36, max=135, avg=82.12, stdev=17.76 00:23:29.244 lat (msec): min=36, max=135, avg=82.15, stdev=17.77 00:23:29.244 clat percentiles (msec): 00:23:29.244 | 1.00th=[ 46], 5.00th=[ 49], 10.00th=[ 59], 20.00th=[ 63], 00:23:29.244 | 30.00th=[ 72], 40.00th=[ 82], 50.00th=[ 85], 60.00th=[ 90], 00:23:29.244 | 70.00th=[ 95], 80.00th=[ 96], 90.00th=[ 105], 95.00th=[ 109], 00:23:29.244 | 99.00th=[ 112], 99.50th=[ 121], 99.90th=[ 136], 99.95th=[ 136], 00:23:29.244 | 99.99th=[ 136] 00:23:29.244 bw ( KiB/s): min= 640, max= 1024, per=4.00%, avg=776.00, stdev=76.29, samples=20 00:23:29.244 iops : min= 160, max= 256, avg=194.00, stdev=19.07, samples=20 00:23:29.244 lat (msec) : 50=5.38%, 100=80.41%, 250=14.21% 00:23:29.244 cpu : usr=33.74%, sys=2.52%, ctx=989, majf=0, minf=9 00:23:29.244 IO depths : 1=0.1%, 2=1.1%, 4=4.5%, 8=78.3%, 16=16.1%, 32=0.0%, >=64=0.0% 00:23:29.244 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:29.244 complete : 0=0.0%, 4=88.8%, 8=10.2%, 16=1.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:29.244 issued rwts: total=1950,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:29.244 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:29.244 filename0: (groupid=0, jobs=1): err= 0: pid=84019: Tue Nov 26 20:51:22 2024 00:23:29.244 read: IOPS=209, BW=839KiB/s (859kB/s)(8392KiB/10003msec) 00:23:29.244 slat (usec): min=4, max=8020, avg=21.90, stdev=182.39 00:23:29.244 clat (msec): min=3, max=122, avg=76.18, stdev=19.87 00:23:29.244 lat (msec): min=3, max=122, avg=76.21, stdev=19.87 00:23:29.244 clat percentiles (msec): 00:23:29.244 | 1.00th=[ 7], 5.00th=[ 48], 10.00th=[ 55], 20.00th=[ 61], 00:23:29.244 | 30.00th=[ 63], 
40.00th=[ 70], 50.00th=[ 79], 60.00th=[ 85], 00:23:29.244 | 70.00th=[ 91], 80.00th=[ 95], 90.00th=[ 100], 95.00th=[ 105], 00:23:29.244 | 99.00th=[ 112], 99.50th=[ 118], 99.90th=[ 118], 99.95th=[ 118], 00:23:29.244 | 99.99th=[ 124] 00:23:29.244 bw ( KiB/s): min= 696, max= 880, per=4.24%, avg=821.05, stdev=45.51, samples=19 00:23:29.244 iops : min= 174, max= 220, avg=205.26, stdev=11.38, samples=19 00:23:29.244 lat (msec) : 4=0.33%, 10=1.05%, 50=6.20%, 100=82.65%, 250=9.77% 00:23:29.244 cpu : usr=37.18%, sys=2.65%, ctx=1131, majf=0, minf=9 00:23:29.244 IO depths : 1=0.1%, 2=0.4%, 4=1.7%, 8=82.1%, 16=15.7%, 32=0.0%, >=64=0.0% 00:23:29.244 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:29.244 complete : 0=0.0%, 4=87.3%, 8=12.3%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:29.244 issued rwts: total=2098,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:29.244 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:29.244 filename0: (groupid=0, jobs=1): err= 0: pid=84020: Tue Nov 26 20:51:22 2024 00:23:29.244 read: IOPS=205, BW=823KiB/s (843kB/s)(8264KiB/10040msec) 00:23:29.244 slat (nsec): min=5532, max=53861, avg=15053.21, stdev=6659.29 00:23:29.244 clat (msec): min=24, max=134, avg=77.60, stdev=18.91 00:23:29.244 lat (msec): min=24, max=134, avg=77.61, stdev=18.91 00:23:29.244 clat percentiles (msec): 00:23:29.244 | 1.00th=[ 36], 5.00th=[ 48], 10.00th=[ 53], 20.00th=[ 59], 00:23:29.244 | 30.00th=[ 64], 40.00th=[ 72], 50.00th=[ 82], 60.00th=[ 86], 00:23:29.244 | 70.00th=[ 92], 80.00th=[ 95], 90.00th=[ 101], 95.00th=[ 107], 00:23:29.244 | 99.00th=[ 112], 99.50th=[ 118], 99.90th=[ 123], 99.95th=[ 131], 00:23:29.244 | 99.99th=[ 134] 00:23:29.244 bw ( KiB/s): min= 712, max= 1216, per=4.25%, avg=822.20, stdev=104.35, samples=20 00:23:29.244 iops : min= 178, max= 304, avg=205.50, stdev=26.10, samples=20 00:23:29.244 lat (msec) : 50=8.33%, 100=81.75%, 250=9.92% 00:23:29.244 cpu : usr=43.87%, sys=3.50%, ctx=1524, majf=0, minf=9 00:23:29.244 IO depths : 1=0.1%, 2=0.1%, 4=0.5%, 8=83.0%, 16=16.3%, 32=0.0%, >=64=0.0% 00:23:29.244 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:29.244 complete : 0=0.0%, 4=87.3%, 8=12.6%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:29.244 issued rwts: total=2066,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:29.244 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:29.244 filename0: (groupid=0, jobs=1): err= 0: pid=84021: Tue Nov 26 20:51:22 2024 00:23:29.244 read: IOPS=212, BW=851KiB/s (872kB/s)(8516KiB/10005msec) 00:23:29.244 slat (usec): min=2, max=4019, avg=17.45, stdev=87.05 00:23:29.244 clat (msec): min=7, max=120, avg=75.11, stdev=19.61 00:23:29.244 lat (msec): min=7, max=120, avg=75.13, stdev=19.61 00:23:29.244 clat percentiles (msec): 00:23:29.244 | 1.00th=[ 25], 5.00th=[ 47], 10.00th=[ 50], 20.00th=[ 59], 00:23:29.244 | 30.00th=[ 61], 40.00th=[ 69], 50.00th=[ 74], 60.00th=[ 84], 00:23:29.244 | 70.00th=[ 89], 80.00th=[ 95], 90.00th=[ 100], 95.00th=[ 105], 00:23:29.244 | 99.00th=[ 110], 99.50th=[ 111], 99.90th=[ 120], 99.95th=[ 121], 00:23:29.244 | 99.99th=[ 122] 00:23:29.244 bw ( KiB/s): min= 744, max= 1010, per=4.33%, avg=838.42, stdev=53.13, samples=19 00:23:29.244 iops : min= 186, max= 252, avg=209.58, stdev=13.19, samples=19 00:23:29.244 lat (msec) : 10=0.75%, 50=10.57%, 100=80.27%, 250=8.41% 00:23:29.244 cpu : usr=32.15%, sys=2.53%, ctx=998, majf=0, minf=9 00:23:29.244 IO depths : 1=0.1%, 2=0.1%, 4=0.5%, 8=83.5%, 16=15.8%, 32=0.0%, >=64=0.0% 00:23:29.244 submit : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:29.244 complete : 0=0.0%, 4=86.9%, 8=13.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:29.244 issued rwts: total=2129,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:29.244 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:29.244 filename0: (groupid=0, jobs=1): err= 0: pid=84022: Tue Nov 26 20:51:22 2024 00:23:29.244 read: IOPS=198, BW=796KiB/s (815kB/s)(7988KiB/10036msec) 00:23:29.244 slat (usec): min=3, max=8028, avg=22.89, stdev=253.35 00:23:29.244 clat (msec): min=39, max=129, avg=80.20, stdev=18.16 00:23:29.244 lat (msec): min=39, max=129, avg=80.22, stdev=18.15 00:23:29.244 clat percentiles (msec): 00:23:29.244 | 1.00th=[ 44], 5.00th=[ 48], 10.00th=[ 56], 20.00th=[ 62], 00:23:29.244 | 30.00th=[ 70], 40.00th=[ 74], 50.00th=[ 84], 60.00th=[ 87], 00:23:29.244 | 70.00th=[ 93], 80.00th=[ 96], 90.00th=[ 104], 95.00th=[ 108], 00:23:29.244 | 99.00th=[ 115], 99.50th=[ 118], 99.90th=[ 129], 99.95th=[ 130], 00:23:29.244 | 99.99th=[ 130] 00:23:29.244 bw ( KiB/s): min= 728, max= 896, per=4.10%, avg=794.80, stdev=50.08, samples=20 00:23:29.244 iops : min= 182, max= 224, avg=198.70, stdev=12.52, samples=20 00:23:29.244 lat (msec) : 50=6.66%, 100=80.87%, 250=12.47% 00:23:29.244 cpu : usr=31.76%, sys=2.48%, ctx=1025, majf=0, minf=9 00:23:29.244 IO depths : 1=0.1%, 2=0.5%, 4=2.0%, 8=81.2%, 16=16.3%, 32=0.0%, >=64=0.0% 00:23:29.244 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:29.244 complete : 0=0.0%, 4=87.9%, 8=11.7%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:29.244 issued rwts: total=1997,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:29.244 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:29.244 filename1: (groupid=0, jobs=1): err= 0: pid=84023: Tue Nov 26 20:51:22 2024 00:23:29.244 read: IOPS=201, BW=806KiB/s (826kB/s)(8096KiB/10040msec) 00:23:29.244 slat (usec): min=6, max=8030, avg=20.69, stdev=199.14 00:23:29.244 clat (msec): min=23, max=133, avg=79.18, stdev=18.93 00:23:29.244 lat (msec): min=23, max=133, avg=79.20, stdev=18.93 00:23:29.244 clat percentiles (msec): 00:23:29.244 | 1.00th=[ 27], 5.00th=[ 48], 10.00th=[ 55], 20.00th=[ 62], 00:23:29.244 | 30.00th=[ 69], 40.00th=[ 75], 50.00th=[ 84], 60.00th=[ 87], 00:23:29.244 | 70.00th=[ 93], 80.00th=[ 96], 90.00th=[ 102], 95.00th=[ 106], 00:23:29.244 | 99.00th=[ 111], 99.50th=[ 116], 99.90th=[ 127], 99.95th=[ 132], 00:23:29.244 | 99.99th=[ 134] 00:23:29.244 bw ( KiB/s): min= 736, max= 1216, per=4.14%, avg=802.60, stdev=102.29, samples=20 00:23:29.244 iops : min= 184, max= 304, avg=200.55, stdev=25.58, samples=20 00:23:29.244 lat (msec) : 50=8.35%, 100=80.83%, 250=10.82% 00:23:29.244 cpu : usr=38.78%, sys=2.27%, ctx=1212, majf=0, minf=9 00:23:29.244 IO depths : 1=0.1%, 2=0.2%, 4=0.9%, 8=82.1%, 16=16.7%, 32=0.0%, >=64=0.0% 00:23:29.244 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:29.244 complete : 0=0.0%, 4=87.8%, 8=12.0%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:29.244 issued rwts: total=2024,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:29.244 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:29.244 filename1: (groupid=0, jobs=1): err= 0: pid=84024: Tue Nov 26 20:51:22 2024 00:23:29.244 read: IOPS=196, BW=786KiB/s (805kB/s)(7900KiB/10050msec) 00:23:29.244 slat (usec): min=3, max=8044, avg=19.13, stdev=202.04 00:23:29.244 clat (msec): min=2, max=132, avg=81.19, stdev=21.87 00:23:29.244 lat (msec): min=2, max=132, avg=81.21, stdev=21.86 00:23:29.244 clat percentiles (msec): 00:23:29.244 | 1.00th=[ 4], 
5.00th=[ 48], 10.00th=[ 57], 20.00th=[ 64], 00:23:29.244 | 30.00th=[ 71], 40.00th=[ 83], 50.00th=[ 85], 60.00th=[ 90], 00:23:29.244 | 70.00th=[ 94], 80.00th=[ 96], 90.00th=[ 105], 95.00th=[ 109], 00:23:29.244 | 99.00th=[ 127], 99.50th=[ 127], 99.90th=[ 132], 99.95th=[ 132], 00:23:29.244 | 99.99th=[ 132] 00:23:29.244 bw ( KiB/s): min= 640, max= 1410, per=4.05%, avg=784.25, stdev=157.07, samples=20 00:23:29.244 iops : min= 160, max= 352, avg=196.00, stdev=39.17, samples=20 00:23:29.244 lat (msec) : 4=1.22%, 10=0.41%, 20=0.81%, 50=4.20%, 100=78.13% 00:23:29.244 lat (msec) : 250=15.24% 00:23:29.245 cpu : usr=37.44%, sys=2.24%, ctx=1186, majf=0, minf=0 00:23:29.245 IO depths : 1=0.2%, 2=1.6%, 4=6.0%, 8=76.5%, 16=15.8%, 32=0.0%, >=64=0.0% 00:23:29.245 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:29.245 complete : 0=0.0%, 4=89.3%, 8=9.4%, 16=1.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:29.245 issued rwts: total=1975,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:29.245 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:29.245 filename1: (groupid=0, jobs=1): err= 0: pid=84025: Tue Nov 26 20:51:22 2024 00:23:29.245 read: IOPS=195, BW=783KiB/s (802kB/s)(7860KiB/10040msec) 00:23:29.245 slat (usec): min=6, max=3045, avg=15.58, stdev=68.68 00:23:29.245 clat (msec): min=37, max=133, avg=81.58, stdev=17.51 00:23:29.245 lat (msec): min=37, max=133, avg=81.60, stdev=17.51 00:23:29.245 clat percentiles (msec): 00:23:29.245 | 1.00th=[ 43], 5.00th=[ 51], 10.00th=[ 58], 20.00th=[ 64], 00:23:29.245 | 30.00th=[ 70], 40.00th=[ 83], 50.00th=[ 85], 60.00th=[ 90], 00:23:29.245 | 70.00th=[ 94], 80.00th=[ 96], 90.00th=[ 104], 95.00th=[ 107], 00:23:29.245 | 99.00th=[ 118], 99.50th=[ 122], 99.90th=[ 132], 99.95th=[ 134], 00:23:29.245 | 99.99th=[ 134] 00:23:29.245 bw ( KiB/s): min= 699, max= 1024, per=4.03%, avg=781.80, stdev=71.14, samples=20 00:23:29.245 iops : min= 174, max= 256, avg=195.40, stdev=17.84, samples=20 00:23:29.245 lat (msec) : 50=5.70%, 100=82.34%, 250=11.96% 00:23:29.245 cpu : usr=38.35%, sys=2.73%, ctx=1195, majf=0, minf=9 00:23:29.245 IO depths : 1=0.1%, 2=0.6%, 4=2.4%, 8=80.3%, 16=16.6%, 32=0.0%, >=64=0.0% 00:23:29.245 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:29.245 complete : 0=0.0%, 4=88.4%, 8=11.1%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:29.245 issued rwts: total=1965,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:29.245 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:29.245 filename1: (groupid=0, jobs=1): err= 0: pid=84026: Tue Nov 26 20:51:22 2024 00:23:29.245 read: IOPS=184, BW=738KiB/s (756kB/s)(7400KiB/10022msec) 00:23:29.245 slat (nsec): min=4666, max=65609, avg=14301.99, stdev=7212.01 00:23:29.245 clat (msec): min=24, max=140, avg=86.58, stdev=19.96 00:23:29.245 lat (msec): min=24, max=140, avg=86.60, stdev=19.96 00:23:29.245 clat percentiles (msec): 00:23:29.245 | 1.00th=[ 45], 5.00th=[ 55], 10.00th=[ 58], 20.00th=[ 68], 00:23:29.245 | 30.00th=[ 80], 40.00th=[ 85], 50.00th=[ 89], 60.00th=[ 94], 00:23:29.245 | 70.00th=[ 96], 80.00th=[ 106], 90.00th=[ 112], 95.00th=[ 115], 00:23:29.245 | 99.00th=[ 130], 99.50th=[ 132], 99.90th=[ 140], 99.95th=[ 140], 00:23:29.245 | 99.99th=[ 140] 00:23:29.245 bw ( KiB/s): min= 528, max= 890, per=3.79%, avg=733.30, stdev=115.18, samples=20 00:23:29.245 iops : min= 132, max= 222, avg=183.30, stdev=28.76, samples=20 00:23:29.245 lat (msec) : 50=3.51%, 100=71.51%, 250=24.97% 00:23:29.245 cpu : usr=41.09%, sys=2.80%, ctx=1605, majf=0, minf=9 00:23:29.245 IO depths : 1=0.1%, 
2=3.0%, 4=11.8%, 8=70.6%, 16=14.5%, 32=0.0%, >=64=0.0% 00:23:29.245 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:29.245 complete : 0=0.0%, 4=90.6%, 8=6.8%, 16=2.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:29.245 issued rwts: total=1850,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:29.245 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:29.245 filename1: (groupid=0, jobs=1): err= 0: pid=84027: Tue Nov 26 20:51:22 2024 00:23:29.245 read: IOPS=211, BW=846KiB/s (866kB/s)(8464KiB/10006msec) 00:23:29.245 slat (usec): min=2, max=8032, avg=34.19, stdev=369.11 00:23:29.245 clat (msec): min=4, max=117, avg=75.52, stdev=19.15 00:23:29.245 lat (msec): min=4, max=117, avg=75.55, stdev=19.16 00:23:29.245 clat percentiles (msec): 00:23:29.245 | 1.00th=[ 8], 5.00th=[ 48], 10.00th=[ 55], 20.00th=[ 60], 00:23:29.245 | 30.00th=[ 63], 40.00th=[ 70], 50.00th=[ 75], 60.00th=[ 85], 00:23:29.245 | 70.00th=[ 89], 80.00th=[ 94], 90.00th=[ 99], 95.00th=[ 106], 00:23:29.245 | 99.00th=[ 109], 99.50th=[ 111], 99.90th=[ 118], 99.95th=[ 118], 00:23:29.245 | 99.99th=[ 118] 00:23:29.245 bw ( KiB/s): min= 768, max= 872, per=4.27%, avg=827.79, stdev=35.51, samples=19 00:23:29.245 iops : min= 192, max= 218, avg=206.95, stdev= 8.88, samples=19 00:23:29.245 lat (msec) : 10=1.18%, 50=7.04%, 100=83.84%, 250=7.94% 00:23:29.245 cpu : usr=38.84%, sys=2.63%, ctx=1166, majf=0, minf=9 00:23:29.245 IO depths : 1=0.1%, 2=0.4%, 4=1.5%, 8=82.6%, 16=15.5%, 32=0.0%, >=64=0.0% 00:23:29.245 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:29.245 complete : 0=0.0%, 4=87.0%, 8=12.6%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:29.245 issued rwts: total=2116,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:29.245 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:29.245 filename1: (groupid=0, jobs=1): err= 0: pid=84028: Tue Nov 26 20:51:22 2024 00:23:29.245 read: IOPS=205, BW=822KiB/s (842kB/s)(8236KiB/10016msec) 00:23:29.245 slat (usec): min=6, max=8022, avg=29.41, stdev=287.91 00:23:29.245 clat (msec): min=24, max=124, avg=77.70, stdev=17.80 00:23:29.245 lat (msec): min=24, max=124, avg=77.73, stdev=17.80 00:23:29.245 clat percentiles (msec): 00:23:29.245 | 1.00th=[ 42], 5.00th=[ 49], 10.00th=[ 56], 20.00th=[ 61], 00:23:29.245 | 30.00th=[ 65], 40.00th=[ 71], 50.00th=[ 81], 60.00th=[ 85], 00:23:29.245 | 70.00th=[ 91], 80.00th=[ 95], 90.00th=[ 101], 95.00th=[ 106], 00:23:29.245 | 99.00th=[ 111], 99.50th=[ 113], 99.90th=[ 118], 99.95th=[ 121], 00:23:29.245 | 99.99th=[ 126] 00:23:29.245 bw ( KiB/s): min= 760, max= 912, per=4.22%, avg=817.20, stdev=36.55, samples=20 00:23:29.245 iops : min= 190, max= 228, avg=204.30, stdev= 9.14, samples=20 00:23:29.245 lat (msec) : 50=5.73%, 100=84.07%, 250=10.20% 00:23:29.245 cpu : usr=40.82%, sys=3.03%, ctx=1336, majf=0, minf=9 00:23:29.245 IO depths : 1=0.1%, 2=0.4%, 4=1.7%, 8=81.8%, 16=16.0%, 32=0.0%, >=64=0.0% 00:23:29.245 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:29.245 complete : 0=0.0%, 4=87.6%, 8=12.1%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:29.245 issued rwts: total=2059,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:29.245 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:29.245 filename1: (groupid=0, jobs=1): err= 0: pid=84029: Tue Nov 26 20:51:22 2024 00:23:29.245 read: IOPS=201, BW=807KiB/s (827kB/s)(8092KiB/10024msec) 00:23:29.245 slat (usec): min=4, max=8027, avg=20.85, stdev=199.12 00:23:29.245 clat (msec): min=24, max=132, avg=79.17, stdev=17.70 00:23:29.245 lat (msec): 
min=24, max=132, avg=79.19, stdev=17.70 00:23:29.245 clat percentiles (msec): 00:23:29.245 | 1.00th=[ 46], 5.00th=[ 50], 10.00th=[ 58], 20.00th=[ 61], 00:23:29.245 | 30.00th=[ 68], 40.00th=[ 72], 50.00th=[ 84], 60.00th=[ 87], 00:23:29.245 | 70.00th=[ 93], 80.00th=[ 95], 90.00th=[ 101], 95.00th=[ 106], 00:23:29.245 | 99.00th=[ 113], 99.50th=[ 114], 99.90th=[ 125], 99.95th=[ 133], 00:23:29.245 | 99.99th=[ 133] 00:23:29.245 bw ( KiB/s): min= 688, max= 1008, per=4.14%, avg=802.80, stdev=62.90, samples=20 00:23:29.245 iops : min= 172, max= 252, avg=200.70, stdev=15.72, samples=20 00:23:29.245 lat (msec) : 50=5.78%, 100=84.03%, 250=10.18% 00:23:29.245 cpu : usr=32.77%, sys=2.24%, ctx=970, majf=0, minf=9 00:23:29.245 IO depths : 1=0.1%, 2=0.5%, 4=2.1%, 8=81.1%, 16=16.2%, 32=0.0%, >=64=0.0% 00:23:29.245 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:29.245 complete : 0=0.0%, 4=87.9%, 8=11.7%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:29.245 issued rwts: total=2023,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:29.245 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:29.245 filename1: (groupid=0, jobs=1): err= 0: pid=84030: Tue Nov 26 20:51:22 2024 00:23:29.245 read: IOPS=204, BW=820KiB/s (840kB/s)(8236KiB/10046msec) 00:23:29.245 slat (usec): min=4, max=4026, avg=17.42, stdev=88.69 00:23:29.245 clat (msec): min=22, max=130, avg=77.89, stdev=18.93 00:23:29.245 lat (msec): min=22, max=130, avg=77.91, stdev=18.93 00:23:29.245 clat percentiles (msec): 00:23:29.245 | 1.00th=[ 33], 5.00th=[ 47], 10.00th=[ 54], 20.00th=[ 61], 00:23:29.245 | 30.00th=[ 65], 40.00th=[ 72], 50.00th=[ 83], 60.00th=[ 87], 00:23:29.245 | 70.00th=[ 92], 80.00th=[ 95], 90.00th=[ 101], 95.00th=[ 105], 00:23:29.245 | 99.00th=[ 108], 99.50th=[ 112], 99.90th=[ 127], 99.95th=[ 129], 00:23:29.245 | 99.99th=[ 131] 00:23:29.245 bw ( KiB/s): min= 736, max= 1285, per=4.23%, avg=818.80, stdev=115.84, samples=20 00:23:29.245 iops : min= 184, max= 321, avg=204.60, stdev=28.92, samples=20 00:23:29.245 lat (msec) : 50=8.84%, 100=81.30%, 250=9.86% 00:23:29.245 cpu : usr=36.36%, sys=2.43%, ctx=1094, majf=0, minf=9 00:23:29.245 IO depths : 1=0.1%, 2=0.2%, 4=0.7%, 8=82.6%, 16=16.5%, 32=0.0%, >=64=0.0% 00:23:29.245 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:29.245 complete : 0=0.0%, 4=87.5%, 8=12.3%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:29.245 issued rwts: total=2059,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:29.245 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:29.245 filename2: (groupid=0, jobs=1): err= 0: pid=84031: Tue Nov 26 20:51:22 2024 00:23:29.245 read: IOPS=204, BW=818KiB/s (837kB/s)(8212KiB/10043msec) 00:23:29.245 slat (usec): min=4, max=8022, avg=18.14, stdev=176.84 00:23:29.245 clat (msec): min=22, max=131, avg=78.08, stdev=19.10 00:23:29.245 lat (msec): min=22, max=131, avg=78.10, stdev=19.10 00:23:29.245 clat percentiles (msec): 00:23:29.245 | 1.00th=[ 32], 5.00th=[ 48], 10.00th=[ 52], 20.00th=[ 61], 00:23:29.245 | 30.00th=[ 66], 40.00th=[ 72], 50.00th=[ 83], 60.00th=[ 87], 00:23:29.245 | 70.00th=[ 92], 80.00th=[ 95], 90.00th=[ 101], 95.00th=[ 107], 00:23:29.245 | 99.00th=[ 111], 99.50th=[ 112], 99.90th=[ 129], 99.95th=[ 131], 00:23:29.245 | 99.99th=[ 132] 00:23:29.245 bw ( KiB/s): min= 720, max= 1264, per=4.22%, avg=817.00, stdev=110.87, samples=20 00:23:29.245 iops : min= 180, max= 316, avg=204.15, stdev=27.75, samples=20 00:23:29.245 lat (msec) : 50=8.57%, 100=81.10%, 250=10.33% 00:23:29.245 cpu : usr=39.41%, sys=3.02%, ctx=1343, 
majf=0, minf=9 00:23:29.245 IO depths : 1=0.1%, 2=0.2%, 4=0.7%, 8=82.5%, 16=16.5%, 32=0.0%, >=64=0.0% 00:23:29.245 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:29.245 complete : 0=0.0%, 4=87.6%, 8=12.2%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:29.245 issued rwts: total=2053,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:29.245 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:29.245 filename2: (groupid=0, jobs=1): err= 0: pid=84032: Tue Nov 26 20:51:22 2024 00:23:29.245 read: IOPS=185, BW=741KiB/s (759kB/s)(7440KiB/10040msec) 00:23:29.245 slat (nsec): min=6269, max=57686, avg=14918.23, stdev=7074.31 00:23:29.245 clat (msec): min=24, max=143, avg=86.22, stdev=17.60 00:23:29.246 lat (msec): min=24, max=143, avg=86.23, stdev=17.60 00:23:29.246 clat percentiles (msec): 00:23:29.246 | 1.00th=[ 28], 5.00th=[ 51], 10.00th=[ 61], 20.00th=[ 72], 00:23:29.246 | 30.00th=[ 84], 40.00th=[ 85], 50.00th=[ 88], 60.00th=[ 95], 00:23:29.246 | 70.00th=[ 96], 80.00th=[ 96], 90.00th=[ 108], 95.00th=[ 109], 00:23:29.246 | 99.00th=[ 123], 99.50th=[ 127], 99.90th=[ 138], 99.95th=[ 144], 00:23:29.246 | 99.99th=[ 144] 00:23:29.246 bw ( KiB/s): min= 624, max= 1149, per=3.80%, avg=736.85, stdev=103.29, samples=20 00:23:29.246 iops : min= 156, max= 287, avg=184.10, stdev=25.76, samples=20 00:23:29.246 lat (msec) : 50=4.14%, 100=80.70%, 250=15.16% 00:23:29.246 cpu : usr=31.75%, sys=2.11%, ctx=882, majf=0, minf=9 00:23:29.246 IO depths : 1=0.1%, 2=1.0%, 4=3.8%, 8=77.9%, 16=17.3%, 32=0.0%, >=64=0.0% 00:23:29.246 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:29.246 complete : 0=0.0%, 4=89.4%, 8=9.8%, 16=0.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:29.246 issued rwts: total=1860,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:29.246 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:29.246 filename2: (groupid=0, jobs=1): err= 0: pid=84033: Tue Nov 26 20:51:22 2024 00:23:29.246 read: IOPS=197, BW=792KiB/s (811kB/s)(7944KiB/10036msec) 00:23:29.246 slat (usec): min=6, max=4038, avg=17.85, stdev=90.63 00:23:29.246 clat (msec): min=42, max=135, avg=80.68, stdev=17.78 00:23:29.246 lat (msec): min=42, max=135, avg=80.70, stdev=17.79 00:23:29.246 clat percentiles (msec): 00:23:29.246 | 1.00th=[ 46], 5.00th=[ 51], 10.00th=[ 56], 20.00th=[ 62], 00:23:29.246 | 30.00th=[ 69], 40.00th=[ 81], 50.00th=[ 85], 60.00th=[ 89], 00:23:29.246 | 70.00th=[ 92], 80.00th=[ 96], 90.00th=[ 103], 95.00th=[ 106], 00:23:29.246 | 99.00th=[ 120], 99.50th=[ 125], 99.90th=[ 125], 99.95th=[ 136], 00:23:29.246 | 99.99th=[ 136] 00:23:29.246 bw ( KiB/s): min= 640, max= 1024, per=4.08%, avg=790.40, stdev=80.82, samples=20 00:23:29.246 iops : min= 160, max= 256, avg=197.60, stdev=20.21, samples=20 00:23:29.246 lat (msec) : 50=4.68%, 100=82.43%, 250=12.89% 00:23:29.246 cpu : usr=41.47%, sys=2.98%, ctx=1683, majf=0, minf=9 00:23:29.246 IO depths : 1=0.1%, 2=1.3%, 4=5.3%, 8=77.7%, 16=15.6%, 32=0.0%, >=64=0.0% 00:23:29.246 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:29.246 complete : 0=0.0%, 4=88.7%, 8=10.1%, 16=1.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:29.246 issued rwts: total=1986,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:29.246 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:29.246 filename2: (groupid=0, jobs=1): err= 0: pid=84034: Tue Nov 26 20:51:22 2024 00:23:29.246 read: IOPS=185, BW=743KiB/s (761kB/s)(7460KiB/10037msec) 00:23:29.246 slat (nsec): min=3014, max=55235, avg=14762.38, stdev=6409.13 00:23:29.246 clat (msec): min=23, 
max=141, avg=85.95, stdev=19.11 00:23:29.246 lat (msec): min=23, max=141, avg=85.97, stdev=19.11 00:23:29.246 clat percentiles (msec): 00:23:29.246 | 1.00th=[ 43], 5.00th=[ 52], 10.00th=[ 59], 20.00th=[ 71], 00:23:29.246 | 30.00th=[ 80], 40.00th=[ 85], 50.00th=[ 87], 60.00th=[ 94], 00:23:29.246 | 70.00th=[ 96], 80.00th=[ 103], 90.00th=[ 108], 95.00th=[ 115], 00:23:29.246 | 99.00th=[ 129], 99.50th=[ 131], 99.90th=[ 136], 99.95th=[ 142], 00:23:29.246 | 99.99th=[ 142] 00:23:29.246 bw ( KiB/s): min= 544, max= 1040, per=3.82%, avg=739.60, stdev=95.63, samples=20 00:23:29.246 iops : min= 136, max= 260, avg=184.90, stdev=23.91, samples=20 00:23:29.246 lat (msec) : 50=4.29%, 100=74.53%, 250=21.18% 00:23:29.246 cpu : usr=31.87%, sys=2.00%, ctx=869, majf=0, minf=9 00:23:29.246 IO depths : 1=0.1%, 2=0.9%, 4=4.1%, 8=78.2%, 16=16.7%, 32=0.0%, >=64=0.0% 00:23:29.246 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:29.246 complete : 0=0.0%, 4=89.2%, 8=9.9%, 16=1.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:29.246 issued rwts: total=1865,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:29.246 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:29.246 filename2: (groupid=0, jobs=1): err= 0: pid=84035: Tue Nov 26 20:51:22 2024 00:23:29.246 read: IOPS=207, BW=829KiB/s (849kB/s)(8288KiB/10002msec) 00:23:29.246 slat (usec): min=2, max=8023, avg=20.94, stdev=196.82 00:23:29.246 clat (msec): min=3, max=119, avg=77.14, stdev=20.02 00:23:29.246 lat (msec): min=3, max=119, avg=77.16, stdev=20.02 00:23:29.246 clat percentiles (msec): 00:23:29.246 | 1.00th=[ 7], 5.00th=[ 48], 10.00th=[ 54], 20.00th=[ 60], 00:23:29.246 | 30.00th=[ 65], 40.00th=[ 72], 50.00th=[ 82], 60.00th=[ 86], 00:23:29.246 | 70.00th=[ 91], 80.00th=[ 95], 90.00th=[ 100], 95.00th=[ 106], 00:23:29.246 | 99.00th=[ 111], 99.50th=[ 117], 99.90th=[ 117], 99.95th=[ 120], 00:23:29.246 | 99.99th=[ 120] 00:23:29.246 bw ( KiB/s): min= 640, max= 881, per=4.17%, avg=808.89, stdev=69.23, samples=19 00:23:29.246 iops : min= 160, max= 220, avg=202.21, stdev=17.29, samples=19 00:23:29.246 lat (msec) : 4=0.34%, 10=1.21%, 50=5.84%, 100=83.16%, 250=9.46% 00:23:29.246 cpu : usr=43.02%, sys=2.87%, ctx=1319, majf=0, minf=9 00:23:29.246 IO depths : 1=0.1%, 2=1.1%, 4=4.5%, 8=79.1%, 16=15.3%, 32=0.0%, >=64=0.0% 00:23:29.246 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:29.246 complete : 0=0.0%, 4=88.1%, 8=10.9%, 16=1.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:29.246 issued rwts: total=2072,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:29.246 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:29.246 filename2: (groupid=0, jobs=1): err= 0: pid=84036: Tue Nov 26 20:51:22 2024 00:23:29.246 read: IOPS=209, BW=838KiB/s (858kB/s)(8396KiB/10025msec) 00:23:29.246 slat (usec): min=3, max=4025, avg=20.92, stdev=151.61 00:23:29.246 clat (msec): min=24, max=123, avg=76.32, stdev=18.58 00:23:29.246 lat (msec): min=24, max=123, avg=76.34, stdev=18.58 00:23:29.246 clat percentiles (msec): 00:23:29.246 | 1.00th=[ 36], 5.00th=[ 48], 10.00th=[ 52], 20.00th=[ 59], 00:23:29.246 | 30.00th=[ 63], 40.00th=[ 70], 50.00th=[ 79], 60.00th=[ 85], 00:23:29.246 | 70.00th=[ 91], 80.00th=[ 94], 90.00th=[ 99], 95.00th=[ 106], 00:23:29.246 | 99.00th=[ 111], 99.50th=[ 111], 99.90th=[ 121], 99.95th=[ 123], 00:23:29.246 | 99.99th=[ 125] 00:23:29.246 bw ( KiB/s): min= 712, max= 1025, per=4.30%, avg=832.85, stdev=59.47, samples=20 00:23:29.246 iops : min= 178, max= 256, avg=208.20, stdev=14.82, samples=20 00:23:29.246 lat (msec) : 50=8.10%, 
100=83.80%, 250=8.10% 00:23:29.246 cpu : usr=35.40%, sys=2.21%, ctx=1088, majf=0, minf=9 00:23:29.246 IO depths : 1=0.1%, 2=0.1%, 4=0.5%, 8=83.3%, 16=16.0%, 32=0.0%, >=64=0.0% 00:23:29.246 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:29.246 complete : 0=0.0%, 4=87.1%, 8=12.8%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:29.246 issued rwts: total=2099,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:29.246 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:29.246 filename2: (groupid=0, jobs=1): err= 0: pid=84037: Tue Nov 26 20:51:22 2024 00:23:29.246 read: IOPS=208, BW=836KiB/s (856kB/s)(8364KiB/10010msec) 00:23:29.246 slat (usec): min=2, max=12050, avg=43.36, stdev=479.46 00:23:29.246 clat (msec): min=17, max=126, avg=76.38, stdev=18.04 00:23:29.246 lat (msec): min=17, max=126, avg=76.42, stdev=18.03 00:23:29.246 clat percentiles (msec): 00:23:29.246 | 1.00th=[ 39], 5.00th=[ 48], 10.00th=[ 57], 20.00th=[ 60], 00:23:29.246 | 30.00th=[ 62], 40.00th=[ 70], 50.00th=[ 78], 60.00th=[ 85], 00:23:29.246 | 70.00th=[ 89], 80.00th=[ 94], 90.00th=[ 100], 95.00th=[ 106], 00:23:29.246 | 99.00th=[ 109], 99.50th=[ 114], 99.90th=[ 121], 99.95th=[ 127], 00:23:29.246 | 99.99th=[ 127] 00:23:29.246 bw ( KiB/s): min= 744, max= 1001, per=4.27%, avg=827.42, stdev=52.93, samples=19 00:23:29.246 iops : min= 186, max= 250, avg=206.84, stdev=13.19, samples=19 00:23:29.246 lat (msec) : 20=0.33%, 50=6.07%, 100=84.46%, 250=9.13% 00:23:29.246 cpu : usr=30.85%, sys=2.30%, ctx=863, majf=0, minf=9 00:23:29.246 IO depths : 1=0.1%, 2=0.5%, 4=2.1%, 8=81.7%, 16=15.6%, 32=0.0%, >=64=0.0% 00:23:29.246 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:29.246 complete : 0=0.0%, 4=87.4%, 8=12.2%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:29.246 issued rwts: total=2091,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:29.246 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:29.246 filename2: (groupid=0, jobs=1): err= 0: pid=84038: Tue Nov 26 20:51:22 2024 00:23:29.246 read: IOPS=204, BW=817KiB/s (837kB/s)(8192KiB/10024msec) 00:23:29.246 slat (usec): min=2, max=8047, avg=22.75, stdev=250.71 00:23:29.246 clat (msec): min=24, max=132, avg=78.19, stdev=17.91 00:23:29.246 lat (msec): min=24, max=132, avg=78.21, stdev=17.90 00:23:29.246 clat percentiles (msec): 00:23:29.246 | 1.00th=[ 45], 5.00th=[ 50], 10.00th=[ 56], 20.00th=[ 61], 00:23:29.246 | 30.00th=[ 65], 40.00th=[ 72], 50.00th=[ 82], 60.00th=[ 86], 00:23:29.246 | 70.00th=[ 92], 80.00th=[ 95], 90.00th=[ 101], 95.00th=[ 106], 00:23:29.246 | 99.00th=[ 111], 99.50th=[ 118], 99.90th=[ 122], 99.95th=[ 133], 00:23:29.246 | 99.99th=[ 133] 00:23:29.246 bw ( KiB/s): min= 712, max= 1008, per=4.19%, avg=812.80, stdev=62.37, samples=20 00:23:29.246 iops : min= 178, max= 252, avg=203.20, stdev=15.59, samples=20 00:23:29.246 lat (msec) : 50=5.47%, 100=83.01%, 250=11.52% 00:23:29.246 cpu : usr=42.10%, sys=2.93%, ctx=1178, majf=0, minf=9 00:23:29.246 IO depths : 1=0.1%, 2=0.5%, 4=2.1%, 8=81.4%, 16=16.0%, 32=0.0%, >=64=0.0% 00:23:29.246 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:29.246 complete : 0=0.0%, 4=87.7%, 8=11.9%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:29.246 issued rwts: total=2048,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:29.246 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:29.246 00:23:29.246 Run status group 0 (all jobs): 00:23:29.246 READ: bw=18.9MiB/s (19.8MB/s), 738KiB/s-865KiB/s (756kB/s-886kB/s), io=190MiB (199MB), run=10002-10051msec 00:23:29.246 
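
The per-file blocks above are standard fio statistics: slat and clat are submission and completion latency, the percentile table is completion latency in msec, and "IO depths" shows how full the iodepth=16 queue was kept during each randread job. The job file itself is generated on the fly by dif.sh (gen_fio_conf) and is not captured in this log; the sketch below shows a minimal comparable randread job for the SPDK fio bdev plugin. The bdev name, block size, and JSON config path are assumptions — only the read pattern, queue depth, and runtime mirror the output above.

    # Illustrative sketch only; not the job file dif.sh actually generated.
    cat > /tmp/randread_example.fio <<'EOF'
    [global]
    ioengine=spdk_bdev
    thread=1
    direct=1
    rw=randread
    ; assumed block size
    bs=4k
    iodepth=16
    runtime=10
    time_based=1

    [filename0]
    ; assumed bdev name from the attached NVMe-oF controller
    filename=Nvme0n1
    EOF
    # bdev.json would hold the bdev_nvme_attach_controller config (see the JSON
    # printed later in this log for the shape of that data).
    LD_PRELOAD=./build/fio/spdk_bdev fio --spdk_json_conf=./bdev.json /tmp/randread_example.fio
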
20:51:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:23:29.246 20:51:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:23:29.246 20:51:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:23:29.246 20:51:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:23:29.246 20:51:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:23:29.246 20:51:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:23:29.246 20:51:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:29.246 20:51:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:29.246 20:51:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:29.246 20:51:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:23:29.246 20:51:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:29.246 20:51:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:29.246 20:51:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:29.247 20:51:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:23:29.247 20:51:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:23:29.247 20:51:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:23:29.247 20:51:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:29.247 20:51:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:29.247 20:51:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:29.247 20:51:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:29.247 20:51:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:23:29.247 20:51:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:29.247 20:51:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:29.247 20:51:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:29.247 20:51:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:23:29.247 20:51:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:23:29.247 20:51:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:23:29.247 20:51:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:23:29.247 20:51:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:29.247 20:51:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:29.247 20:51:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:29.247 20:51:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:23:29.247 20:51:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:29.247 20:51:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:29.247 20:51:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
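
The destroy_subsystems helper traced above is a thin wrapper over SPDK's JSON-RPC interface; rpc_cmd resolves to scripts/rpc.py against the running nvmf target. Done by hand, tearing down one null-bdev subsystem amounts to the two calls below (a sketch — subsystem and bdev names copied verbatim from the trace, the surrounding xtrace plumbing omitted):

    # remove the NVMe-oF subsystem first, then delete its backing null bdev
    ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
    ./scripts/rpc.py bdev_null_delete bdev_null0
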
00:23:29.247 20:51:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:23:29.247 20:51:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:23:29.247 20:51:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:23:29.247 20:51:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:23:29.247 20:51:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:23:29.247 20:51:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:23:29.247 20:51:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:23:29.247 20:51:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:23:29.247 20:51:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:23:29.247 20:51:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:23:29.247 20:51:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:23:29.247 20:51:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:23:29.247 20:51:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:29.247 20:51:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:29.247 bdev_null0 00:23:29.247 20:51:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:29.247 20:51:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:23:29.247 20:51:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:29.247 20:51:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:29.247 20:51:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:29.247 20:51:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:23:29.247 20:51:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:29.247 20:51:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:29.247 20:51:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:29.247 20:51:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:23:29.247 20:51:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:29.247 20:51:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:29.247 [2024-11-26 20:51:22.737438] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:29.247 20:51:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:29.247 20:51:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:23:29.247 20:51:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:23:29.247 20:51:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:23:29.247 20:51:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:23:29.247 20:51:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:29.247 20:51:22 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:29.247 bdev_null1 00:23:29.247 20:51:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:29.247 20:51:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:23:29.247 20:51:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:29.247 20:51:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:29.247 20:51:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:29.247 20:51:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:23:29.247 20:51:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:29.247 20:51:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:29.247 20:51:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:29.247 20:51:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:23:29.247 20:51:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:29.247 20:51:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:29.247 20:51:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:29.247 20:51:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:23:29.247 20:51:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:23:29.247 20:51:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:23:29.247 20:51:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:23:29.247 20:51:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:23:29.247 20:51:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:29.247 20:51:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:29.247 20:51:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:29.247 20:51:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:29.247 { 00:23:29.247 "params": { 00:23:29.247 "name": "Nvme$subsystem", 00:23:29.247 "trtype": "$TEST_TRANSPORT", 00:23:29.247 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:29.247 "adrfam": "ipv4", 00:23:29.247 "trsvcid": "$NVMF_PORT", 00:23:29.247 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:29.247 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:29.247 "hdgst": ${hdgst:-false}, 00:23:29.247 "ddgst": ${ddgst:-false} 00:23:29.247 }, 00:23:29.247 "method": "bdev_nvme_attach_controller" 00:23:29.247 } 00:23:29.247 EOF 00:23:29.247 )") 00:23:29.247 20:51:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:23:29.247 20:51:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:23:29.247 20:51:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:23:29.247 20:51:22 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:29.247 20:51:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:23:29.247 20:51:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:23:29.247 20:51:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:29.247 20:51:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:23:29.247 20:51:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:23:29.247 20:51:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:23:29.247 20:51:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:23:29.247 20:51:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:29.247 20:51:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:23:29.247 20:51:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:23:29.247 20:51:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:23:29.247 20:51:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:23:29.247 20:51:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:23:29.247 20:51:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:29.247 20:51:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:29.247 { 00:23:29.247 "params": { 00:23:29.247 "name": "Nvme$subsystem", 00:23:29.247 "trtype": "$TEST_TRANSPORT", 00:23:29.247 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:29.247 "adrfam": "ipv4", 00:23:29.247 "trsvcid": "$NVMF_PORT", 00:23:29.247 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:29.247 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:29.247 "hdgst": ${hdgst:-false}, 00:23:29.247 "ddgst": ${ddgst:-false} 00:23:29.247 }, 00:23:29.247 "method": "bdev_nvme_attach_controller" 00:23:29.247 } 00:23:29.247 EOF 00:23:29.247 )") 00:23:29.247 20:51:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:23:29.247 20:51:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:23:29.247 20:51:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:23:29.247 20:51:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:23:29.247 20:51:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:23:29.247 20:51:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:23:29.247 "params": { 00:23:29.247 "name": "Nvme0", 00:23:29.247 "trtype": "tcp", 00:23:29.247 "traddr": "10.0.0.3", 00:23:29.248 "adrfam": "ipv4", 00:23:29.248 "trsvcid": "4420", 00:23:29.248 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:29.248 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:29.248 "hdgst": false, 00:23:29.248 "ddgst": false 00:23:29.248 }, 00:23:29.248 "method": "bdev_nvme_attach_controller" 00:23:29.248 },{ 00:23:29.248 "params": { 00:23:29.248 "name": "Nvme1", 00:23:29.248 "trtype": "tcp", 00:23:29.248 "traddr": "10.0.0.3", 00:23:29.248 "adrfam": "ipv4", 00:23:29.248 "trsvcid": "4420", 00:23:29.248 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:29.248 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:29.248 "hdgst": false, 00:23:29.248 "ddgst": false 00:23:29.248 }, 00:23:29.248 "method": "bdev_nvme_attach_controller" 00:23:29.248 }' 00:23:29.248 20:51:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:23:29.248 20:51:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:23:29.248 20:51:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:23:29.248 20:51:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:29.248 20:51:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:23:29.248 20:51:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:23:29.248 20:51:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:23:29.248 20:51:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:23:29.248 20:51:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:23:29.248 20:51:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:29.248 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:23:29.248 ... 00:23:29.248 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:23:29.248 ... 
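
The JSON printed above is the configuration handed to the fio bdev plugin through /dev/fd/62; the plugin brings up its own SPDK application, and each "params" object becomes one bdev_nvme_attach_controller call that connects back to the NVMe/TCP listeners created earlier. Against a standalone SPDK app the first attach would look roughly like the sketch below (flag spelling as commonly used with scripts/rpc.py; illustrative rather than a transcript of what the harness ran):

    # attach the first controller from the JSON config (Nvme0 -> cnode0 at 10.0.0.3:4420)
    ./scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 \
        -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0
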
00:23:29.248 fio-3.35 00:23:29.248 Starting 4 threads 00:23:34.518 00:23:34.518 filename0: (groupid=0, jobs=1): err= 0: pid=84191: Tue Nov 26 20:51:28 2024 00:23:34.518 read: IOPS=2566, BW=20.0MiB/s (21.0MB/s)(100MiB/5001msec) 00:23:34.518 slat (nsec): min=3719, max=99669, avg=16189.41, stdev=9670.30 00:23:34.518 clat (usec): min=301, max=5805, avg=3068.39, stdev=780.50 00:23:34.518 lat (usec): min=310, max=5813, avg=3084.57, stdev=780.57 00:23:34.518 clat percentiles (usec): 00:23:34.518 | 1.00th=[ 1516], 5.00th=[ 1762], 10.00th=[ 1942], 20.00th=[ 2212], 00:23:34.518 | 30.00th=[ 2573], 40.00th=[ 2966], 50.00th=[ 3130], 60.00th=[ 3392], 00:23:34.518 | 70.00th=[ 3654], 80.00th=[ 3818], 90.00th=[ 3982], 95.00th=[ 4113], 00:23:34.518 | 99.00th=[ 4424], 99.50th=[ 4621], 99.90th=[ 4948], 99.95th=[ 5080], 00:23:34.518 | 99.99th=[ 5604] 00:23:34.518 bw ( KiB/s): min=18880, max=21840, per=25.47%, avg=20531.56, stdev=867.87, samples=9 00:23:34.518 iops : min= 2360, max= 2730, avg=2566.44, stdev=108.48, samples=9 00:23:34.518 lat (usec) : 500=0.05%, 1000=0.19% 00:23:34.518 lat (msec) : 2=10.91%, 4=80.66%, 10=8.19% 00:23:34.518 cpu : usr=93.34%, sys=5.98%, ctx=30, majf=0, minf=10 00:23:34.518 IO depths : 1=0.9%, 2=8.3%, 4=59.6%, 8=31.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:34.518 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:34.518 complete : 0=0.0%, 4=96.8%, 8=3.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:34.518 issued rwts: total=12834,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:34.518 latency : target=0, window=0, percentile=100.00%, depth=8 00:23:34.518 filename0: (groupid=0, jobs=1): err= 0: pid=84192: Tue Nov 26 20:51:28 2024 00:23:34.518 read: IOPS=2573, BW=20.1MiB/s (21.1MB/s)(101MiB/5001msec) 00:23:34.518 slat (nsec): min=4774, max=69529, avg=16943.42, stdev=9664.66 00:23:34.518 clat (usec): min=370, max=5530, avg=3057.59, stdev=772.41 00:23:34.518 lat (usec): min=380, max=5556, avg=3074.53, stdev=772.10 00:23:34.518 clat percentiles (usec): 00:23:34.518 | 1.00th=[ 1500], 5.00th=[ 1745], 10.00th=[ 1958], 20.00th=[ 2212], 00:23:34.518 | 30.00th=[ 2540], 40.00th=[ 2966], 50.00th=[ 3130], 60.00th=[ 3392], 00:23:34.518 | 70.00th=[ 3654], 80.00th=[ 3785], 90.00th=[ 3949], 95.00th=[ 4113], 00:23:34.518 | 99.00th=[ 4490], 99.50th=[ 4686], 99.90th=[ 5145], 99.95th=[ 5473], 00:23:34.518 | 99.99th=[ 5538] 00:23:34.518 bw ( KiB/s): min=19696, max=21488, per=25.59%, avg=20631.11, stdev=591.03, samples=9 00:23:34.518 iops : min= 2462, max= 2686, avg=2578.89, stdev=73.88, samples=9 00:23:34.518 lat (usec) : 500=0.01%, 750=0.02%, 1000=0.22% 00:23:34.518 lat (msec) : 2=10.99%, 4=81.58%, 10=7.19% 00:23:34.518 cpu : usr=93.86%, sys=5.48%, ctx=8, majf=0, minf=9 00:23:34.518 IO depths : 1=0.8%, 2=8.2%, 4=59.6%, 8=31.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:34.518 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:34.518 complete : 0=0.0%, 4=96.8%, 8=3.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:34.518 issued rwts: total=12872,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:34.518 latency : target=0, window=0, percentile=100.00%, depth=8 00:23:34.518 filename1: (groupid=0, jobs=1): err= 0: pid=84193: Tue Nov 26 20:51:28 2024 00:23:34.518 read: IOPS=2539, BW=19.8MiB/s (20.8MB/s)(99.2MiB/5001msec) 00:23:34.518 slat (nsec): min=3450, max=69811, avg=17957.05, stdev=10145.97 00:23:34.518 clat (usec): min=291, max=5773, avg=3095.56, stdev=743.93 00:23:34.518 lat (usec): min=300, max=5780, avg=3113.51, stdev=743.64 00:23:34.518 clat percentiles (usec): 00:23:34.518 | 
1.00th=[ 1549], 5.00th=[ 1778], 10.00th=[ 2024], 20.00th=[ 2311], 00:23:34.518 | 30.00th=[ 2704], 40.00th=[ 2999], 50.00th=[ 3195], 60.00th=[ 3490], 00:23:34.518 | 70.00th=[ 3687], 80.00th=[ 3785], 90.00th=[ 3916], 95.00th=[ 4047], 00:23:34.518 | 99.00th=[ 4359], 99.50th=[ 4555], 99.90th=[ 4883], 99.95th=[ 4948], 00:23:34.518 | 99.99th=[ 5145] 00:23:34.518 bw ( KiB/s): min=19728, max=21312, per=25.17%, avg=20290.67, stdev=589.89, samples=9 00:23:34.518 iops : min= 2466, max= 2664, avg=2536.33, stdev=73.74, samples=9 00:23:34.518 lat (usec) : 500=0.01%, 750=0.02%, 1000=0.06% 00:23:34.518 lat (msec) : 2=9.39%, 4=83.81%, 10=6.70% 00:23:34.518 cpu : usr=93.66%, sys=5.68%, ctx=5, majf=0, minf=9 00:23:34.518 IO depths : 1=0.9%, 2=8.7%, 4=59.4%, 8=31.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:34.518 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:34.519 complete : 0=0.0%, 4=96.6%, 8=3.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:34.519 issued rwts: total=12702,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:34.519 latency : target=0, window=0, percentile=100.00%, depth=8 00:23:34.519 filename1: (groupid=0, jobs=1): err= 0: pid=84194: Tue Nov 26 20:51:28 2024 00:23:34.519 read: IOPS=2396, BW=18.7MiB/s (19.6MB/s)(93.6MiB/5001msec) 00:23:34.519 slat (nsec): min=5416, max=72510, avg=18249.31, stdev=10570.68 00:23:34.519 clat (usec): min=858, max=5883, avg=3281.30, stdev=804.28 00:23:34.519 lat (usec): min=865, max=5903, avg=3299.55, stdev=802.87 00:23:34.519 clat percentiles (usec): 00:23:34.519 | 1.00th=[ 1631], 5.00th=[ 1778], 10.00th=[ 2008], 20.00th=[ 2573], 00:23:34.519 | 30.00th=[ 2966], 40.00th=[ 3130], 50.00th=[ 3392], 60.00th=[ 3654], 00:23:34.519 | 70.00th=[ 3851], 80.00th=[ 3949], 90.00th=[ 4228], 95.00th=[ 4359], 00:23:34.519 | 99.00th=[ 4752], 99.50th=[ 4948], 99.90th=[ 5211], 99.95th=[ 5473], 00:23:34.519 | 99.99th=[ 5866] 00:23:34.519 bw ( KiB/s): min=18368, max=20480, per=23.94%, avg=19302.33, stdev=664.35, samples=9 00:23:34.519 iops : min= 2296, max= 2560, avg=2412.78, stdev=83.04, samples=9 00:23:34.519 lat (usec) : 1000=0.54% 00:23:34.519 lat (msec) : 2=9.20%, 4=72.65%, 10=17.61% 00:23:34.519 cpu : usr=93.04%, sys=6.04%, ctx=19, majf=0, minf=0 00:23:34.519 IO depths : 1=1.5%, 2=11.7%, 4=57.7%, 8=29.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:34.519 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:34.519 complete : 0=0.0%, 4=95.0%, 8=5.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:34.519 issued rwts: total=11985,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:34.519 latency : target=0, window=0, percentile=100.00%, depth=8 00:23:34.519 00:23:34.519 Run status group 0 (all jobs): 00:23:34.519 READ: bw=78.7MiB/s (82.5MB/s), 18.7MiB/s-20.1MiB/s (19.6MB/s-21.1MB/s), io=394MiB (413MB), run=5001-5001msec 00:23:34.519 20:51:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:23:34.519 20:51:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:23:34.519 20:51:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:23:34.519 20:51:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:23:34.519 20:51:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:23:34.519 20:51:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:23:34.519 20:51:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:34.519 20:51:28 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:23:34.519 20:51:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:34.519 20:51:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:23:34.519 20:51:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:34.519 20:51:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:34.519 20:51:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:34.519 20:51:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:23:34.519 20:51:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:23:34.519 20:51:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:23:34.519 20:51:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:34.519 20:51:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:34.519 20:51:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:34.519 20:51:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:34.519 20:51:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:23:34.519 20:51:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:34.519 20:51:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:34.519 ************************************ 00:23:34.519 END TEST fio_dif_rand_params 00:23:34.519 ************************************ 00:23:34.519 20:51:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:34.519 00:23:34.519 real 0m23.712s 00:23:34.519 user 2m3.507s 00:23:34.519 sys 0m9.998s 00:23:34.519 20:51:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:34.519 20:51:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:34.519 20:51:28 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:23:34.519 20:51:28 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:34.519 20:51:28 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:34.519 20:51:28 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:23:34.519 ************************************ 00:23:34.519 START TEST fio_dif_digest 00:23:34.519 ************************************ 00:23:34.519 20:51:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:23:34.519 20:51:28 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:23:34.519 20:51:28 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:23:34.519 20:51:28 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:23:34.519 20:51:28 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:23:34.519 20:51:28 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:23:34.519 20:51:28 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:23:34.519 20:51:28 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:23:34.519 20:51:28 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:23:34.519 20:51:28 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:23:34.519 20:51:28 nvmf_dif.fio_dif_digest -- 
target/dif.sh@128 -- # ddgst=true 00:23:34.519 20:51:28 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:23:34.519 20:51:28 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:23:34.519 20:51:28 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:23:34.519 20:51:28 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:23:34.519 20:51:28 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:23:34.519 20:51:28 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:23:34.519 20:51:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:34.519 20:51:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:23:34.519 bdev_null0 00:23:34.519 20:51:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:34.519 20:51:28 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:23:34.519 20:51:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:34.519 20:51:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:23:34.519 20:51:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:34.519 20:51:28 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:23:34.519 20:51:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:34.519 20:51:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:23:34.519 20:51:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:34.519 20:51:28 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:23:34.519 20:51:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:34.519 20:51:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:23:34.519 [2024-11-26 20:51:28.958297] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:34.519 20:51:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:34.519 20:51:28 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:23:34.519 20:51:28 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:23:34.519 20:51:28 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:23:34.519 20:51:28 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:23:34.519 20:51:28 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:23:34.519 20:51:28 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:34.519 20:51:28 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:34.519 { 00:23:34.519 "params": { 00:23:34.519 "name": "Nvme$subsystem", 00:23:34.519 "trtype": "$TEST_TRANSPORT", 00:23:34.519 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:34.519 "adrfam": "ipv4", 00:23:34.519 "trsvcid": "$NVMF_PORT", 00:23:34.519 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:34.519 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:34.519 "hdgst": ${hdgst:-false}, 00:23:34.519 "ddgst": ${ddgst:-false} 00:23:34.519 }, 00:23:34.519 "method": "bdev_nvme_attach_controller" 
00:23:34.519 } 00:23:34.519 EOF 00:23:34.519 )") 00:23:34.519 20:51:28 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:34.519 20:51:28 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:23:34.519 20:51:28 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:23:34.519 20:51:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:34.519 20:51:28 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:23:34.519 20:51:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:23:34.519 20:51:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:34.519 20:51:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:23:34.519 20:51:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:34.519 20:51:28 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:23:34.519 20:51:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift 00:23:34.519 20:51:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 00:23:34.519 20:51:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:23:34.519 20:51:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:34.519 20:51:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:23:34.519 20:51:28 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 
00:23:34.519 20:51:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:23:34.519 20:51:28 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:23:34.519 20:51:28 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:23:34.519 20:51:28 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:23:34.519 20:51:28 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:23:34.519 "params": { 00:23:34.519 "name": "Nvme0", 00:23:34.519 "trtype": "tcp", 00:23:34.519 "traddr": "10.0.0.3", 00:23:34.519 "adrfam": "ipv4", 00:23:34.519 "trsvcid": "4420", 00:23:34.519 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:34.519 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:34.519 "hdgst": true, 00:23:34.519 "ddgst": true 00:23:34.519 }, 00:23:34.519 "method": "bdev_nvme_attach_controller" 00:23:34.519 }' 00:23:34.519 20:51:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:23:34.519 20:51:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:23:34.519 20:51:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:23:34.519 20:51:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:23:34.519 20:51:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:34.519 20:51:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:23:34.519 20:51:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:23:34.519 20:51:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:23:34.519 20:51:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:23:34.519 20:51:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:34.519 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:23:34.519 ... 
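
fio_dif_digest repeats the same pattern with two changes visible in the trace above: the null bdev is created with --dif-type 3, and the generated attach parameters set "hdgst": true and "ddgst": true so the NVMe/TCP connection carries header and data digests. Rewritten as direct rpc.py calls with arguments copied from the trace, the target-side setup is (a sketch of what rpc_cmd executes, not a capture from this run):

    # DIF type 3 null bdev: 64 MB, 512-byte blocks, 16-byte metadata per block
    ./scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
        --serial-number 53313233-0 --allow-any-host
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.3 -s 4420
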
00:23:34.519 fio-3.35 00:23:34.519 Starting 3 threads 00:23:46.721 00:23:46.721 filename0: (groupid=0, jobs=1): err= 0: pid=84296: Tue Nov 26 20:51:39 2024 00:23:46.721 read: IOPS=255, BW=31.9MiB/s (33.5MB/s)(320MiB/10009msec) 00:23:46.721 slat (nsec): min=6420, max=37679, avg=15647.71, stdev=3990.72 00:23:46.721 clat (usec): min=8595, max=13349, avg=11712.89, stdev=778.34 00:23:46.721 lat (usec): min=8602, max=13366, avg=11728.54, stdev=779.38 00:23:46.721 clat percentiles (usec): 00:23:46.721 | 1.00th=[10814], 5.00th=[10814], 10.00th=[10814], 20.00th=[10945], 00:23:46.721 | 30.00th=[11076], 40.00th=[11207], 50.00th=[11469], 60.00th=[11863], 00:23:46.721 | 70.00th=[12387], 80.00th=[12649], 90.00th=[12780], 95.00th=[12911], 00:23:46.721 | 99.00th=[13173], 99.50th=[13173], 99.90th=[13304], 99.95th=[13304], 00:23:46.721 | 99.99th=[13304] 00:23:46.721 bw ( KiB/s): min=29952, max=34560, per=33.43%, avg=32781.47, stdev=1933.65, samples=19 00:23:46.721 iops : min= 234, max= 270, avg=256.11, stdev=15.11, samples=19 00:23:46.721 lat (msec) : 10=0.23%, 20=99.77% 00:23:46.721 cpu : usr=89.24%, sys=10.31%, ctx=28, majf=0, minf=0 00:23:46.721 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:46.721 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:46.721 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:46.721 issued rwts: total=2556,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:46.721 latency : target=0, window=0, percentile=100.00%, depth=3 00:23:46.721 filename0: (groupid=0, jobs=1): err= 0: pid=84297: Tue Nov 26 20:51:39 2024 00:23:46.721 read: IOPS=255, BW=31.9MiB/s (33.5MB/s)(320MiB/10001msec) 00:23:46.721 slat (nsec): min=6230, max=41274, avg=15137.69, stdev=4665.97 00:23:46.721 clat (usec): min=4016, max=13343, avg=11704.23, stdev=832.60 00:23:46.721 lat (usec): min=4036, max=13362, avg=11719.37, stdev=833.45 00:23:46.721 clat percentiles (usec): 00:23:46.721 | 1.00th=[10814], 5.00th=[10814], 10.00th=[10814], 20.00th=[10945], 00:23:46.721 | 30.00th=[11076], 40.00th=[11207], 50.00th=[11469], 60.00th=[11863], 00:23:46.721 | 70.00th=[12387], 80.00th=[12649], 90.00th=[12780], 95.00th=[12911], 00:23:46.721 | 99.00th=[13042], 99.50th=[13304], 99.90th=[13304], 99.95th=[13304], 00:23:46.721 | 99.99th=[13304] 00:23:46.721 bw ( KiB/s): min=29952, max=35328, per=33.47%, avg=32821.89, stdev=1996.83, samples=19 00:23:46.721 iops : min= 234, max= 276, avg=256.42, stdev=15.60, samples=19 00:23:46.721 lat (msec) : 10=0.23%, 20=99.77% 00:23:46.721 cpu : usr=89.10%, sys=10.45%, ctx=45, majf=0, minf=0 00:23:46.721 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:46.721 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:46.721 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:46.721 issued rwts: total=2556,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:46.721 latency : target=0, window=0, percentile=100.00%, depth=3 00:23:46.721 filename0: (groupid=0, jobs=1): err= 0: pid=84298: Tue Nov 26 20:51:39 2024 00:23:46.721 read: IOPS=255, BW=31.9MiB/s (33.5MB/s)(320MiB/10009msec) 00:23:46.721 slat (nsec): min=6555, max=43378, avg=15864.70, stdev=4232.65 00:23:46.721 clat (usec): min=8629, max=13338, avg=11712.10, stdev=778.15 00:23:46.721 lat (usec): min=8636, max=13353, avg=11727.97, stdev=779.25 00:23:46.721 clat percentiles (usec): 00:23:46.721 | 1.00th=[10814], 5.00th=[10814], 10.00th=[10814], 20.00th=[10945], 00:23:46.721 | 30.00th=[11076], 
40.00th=[11207], 50.00th=[11469], 60.00th=[11863], 00:23:46.721 | 70.00th=[12387], 80.00th=[12649], 90.00th=[12780], 95.00th=[12911], 00:23:46.721 | 99.00th=[13173], 99.50th=[13173], 99.90th=[13304], 99.95th=[13304], 00:23:46.721 | 99.99th=[13304] 00:23:46.721 bw ( KiB/s): min=29952, max=34560, per=33.43%, avg=32781.47, stdev=1933.65, samples=19 00:23:46.721 iops : min= 234, max= 270, avg=256.11, stdev=15.11, samples=19 00:23:46.721 lat (msec) : 10=0.23%, 20=99.77% 00:23:46.721 cpu : usr=88.82%, sys=10.76%, ctx=13, majf=0, minf=0 00:23:46.721 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:46.721 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:46.721 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:46.721 issued rwts: total=2556,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:46.721 latency : target=0, window=0, percentile=100.00%, depth=3 00:23:46.721 00:23:46.721 Run status group 0 (all jobs): 00:23:46.721 READ: bw=95.8MiB/s (100MB/s), 31.9MiB/s-31.9MiB/s (33.5MB/s-33.5MB/s), io=959MiB (1005MB), run=10001-10009msec 00:23:46.721 20:51:39 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:23:46.721 20:51:39 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:23:46.721 20:51:39 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:23:46.721 20:51:39 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:23:46.721 20:51:39 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:23:46.721 20:51:39 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:23:46.721 20:51:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.721 20:51:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:23:46.721 20:51:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.721 20:51:39 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:23:46.721 20:51:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.721 20:51:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:23:46.721 20:51:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.721 00:23:46.721 real 0m11.079s 00:23:46.721 user 0m27.407s 00:23:46.721 sys 0m3.479s 00:23:46.721 20:51:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:46.721 20:51:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:23:46.721 ************************************ 00:23:46.721 END TEST fio_dif_digest 00:23:46.721 ************************************ 00:23:46.721 20:51:40 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:23:46.721 20:51:40 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:23:46.721 20:51:40 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:46.721 20:51:40 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:23:46.721 20:51:40 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:46.721 20:51:40 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:23:46.721 20:51:40 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:46.721 20:51:40 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:46.721 rmmod nvme_tcp 00:23:46.721 rmmod nvme_fabrics 00:23:46.721 rmmod nvme_keyring 00:23:46.721 20:51:40 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
00:23:46.721 20:51:40 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:23:46.721 20:51:40 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:23:46.721 20:51:40 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 83527 ']' 00:23:46.721 20:51:40 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 83527 00:23:46.721 20:51:40 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 83527 ']' 00:23:46.721 20:51:40 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 83527 00:23:46.721 20:51:40 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:23:46.721 20:51:40 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:46.722 20:51:40 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83527 00:23:46.722 20:51:40 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:46.722 20:51:40 nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:46.722 killing process with pid 83527 00:23:46.722 20:51:40 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83527' 00:23:46.722 20:51:40 nvmf_dif -- common/autotest_common.sh@973 -- # kill 83527 00:23:46.722 20:51:40 nvmf_dif -- common/autotest_common.sh@978 -- # wait 83527 00:23:46.722 20:51:40 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:23:46.722 20:51:40 nvmf_dif -- nvmf/common.sh@521 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:23:46.722 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:46.722 Waiting for block devices as requested 00:23:46.722 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:23:46.722 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:23:46.722 20:51:41 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:46.722 20:51:41 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:46.722 20:51:41 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:23:46.722 20:51:41 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:23:46.722 20:51:41 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:23:46.722 20:51:41 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:46.722 20:51:41 nvmf_dif -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:46.722 20:51:41 nvmf_dif -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:23:46.722 20:51:41 nvmf_dif -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:23:46.722 20:51:41 nvmf_dif -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:23:46.722 20:51:41 nvmf_dif -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:23:46.722 20:51:41 nvmf_dif -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:23:46.722 20:51:41 nvmf_dif -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:23:46.722 20:51:41 nvmf_dif -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:23:46.722 20:51:41 nvmf_dif -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:23:46.722 20:51:41 nvmf_dif -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:23:46.722 20:51:41 nvmf_dif -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:23:46.722 20:51:41 nvmf_dif -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:23:46.722 20:51:41 nvmf_dif -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:23:46.722 20:51:41 nvmf_dif -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:46.722 20:51:41 nvmf_dif -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 
00:23:46.722 20:51:41 nvmf_dif -- nvmf/common.sh@246 -- # remove_spdk_ns 00:23:46.722 20:51:41 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:46.722 20:51:41 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:23:46.722 20:51:41 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:46.722 20:51:41 nvmf_dif -- nvmf/common.sh@300 -- # return 0 00:23:46.722 00:23:46.722 real 1m1.215s 00:23:46.722 user 3m46.678s 00:23:46.722 sys 0m23.559s 00:23:46.722 20:51:41 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:46.722 20:51:41 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:23:46.722 ************************************ 00:23:46.722 END TEST nvmf_dif 00:23:46.722 ************************************ 00:23:46.722 20:51:41 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:23:46.722 20:51:41 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:46.722 20:51:41 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:46.722 20:51:41 -- common/autotest_common.sh@10 -- # set +x 00:23:46.722 ************************************ 00:23:46.722 START TEST nvmf_abort_qd_sizes 00:23:46.722 ************************************ 00:23:46.722 20:51:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:23:46.722 * Looking for test storage... 00:23:46.722 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:23:46.722 20:51:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:46.722 20:51:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lcov --version 00:23:46.722 20:51:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:46.981 20:51:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:46.981 20:51:41 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:46.981 20:51:41 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:46.981 20:51:41 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:46.981 20:51:41 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:23:46.981 20:51:41 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:23:46.981 20:51:41 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:23:46.981 20:51:41 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:23:46.981 20:51:41 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:23:46.981 20:51:41 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:23:46.981 20:51:41 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:23:46.981 20:51:41 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:46.981 20:51:41 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:23:46.981 20:51:41 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:23:46.981 20:51:41 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:46.981 20:51:41 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:46.981 20:51:41 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:23:46.981 20:51:41 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:23:46.981 20:51:41 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:46.981 20:51:41 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:23:46.981 20:51:41 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:23:46.981 20:51:41 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:23:46.981 20:51:41 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:23:46.981 20:51:41 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:46.981 20:51:41 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:23:46.981 20:51:41 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:23:46.981 20:51:41 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:46.981 20:51:41 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:46.981 20:51:41 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:23:46.981 20:51:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:46.981 20:51:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:46.981 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:46.981 --rc genhtml_branch_coverage=1 00:23:46.981 --rc genhtml_function_coverage=1 00:23:46.981 --rc genhtml_legend=1 00:23:46.981 --rc geninfo_all_blocks=1 00:23:46.981 --rc geninfo_unexecuted_blocks=1 00:23:46.981 00:23:46.981 ' 00:23:46.981 20:51:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:46.981 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:46.981 --rc genhtml_branch_coverage=1 00:23:46.981 --rc genhtml_function_coverage=1 00:23:46.981 --rc genhtml_legend=1 00:23:46.981 --rc geninfo_all_blocks=1 00:23:46.981 --rc geninfo_unexecuted_blocks=1 00:23:46.981 00:23:46.981 ' 00:23:46.981 20:51:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:46.981 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:46.981 --rc genhtml_branch_coverage=1 00:23:46.981 --rc genhtml_function_coverage=1 00:23:46.981 --rc genhtml_legend=1 00:23:46.981 --rc geninfo_all_blocks=1 00:23:46.981 --rc geninfo_unexecuted_blocks=1 00:23:46.981 00:23:46.981 ' 00:23:46.981 20:51:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:46.981 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:46.981 --rc genhtml_branch_coverage=1 00:23:46.981 --rc genhtml_function_coverage=1 00:23:46.981 --rc genhtml_legend=1 00:23:46.981 --rc geninfo_all_blocks=1 00:23:46.981 --rc geninfo_unexecuted_blocks=1 00:23:46.981 00:23:46.981 ' 00:23:46.981 20:51:41 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:46.981 20:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:23:46.981 20:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:46.981 20:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:46.981 20:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:46.981 20:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:46.981 20:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:23:46.981 20:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:46.981 20:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:46.981 20:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:46.981 20:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:46.981 20:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:46.981 20:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b 00:23:46.981 20:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=5b7a0101-ee75-44bd-b64f-b6a56d193f2b 00:23:46.981 20:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:46.981 20:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:46.981 20:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:46.981 20:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:46.981 20:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:46.981 20:51:41 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:23:46.981 20:51:41 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:46.981 20:51:41 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:46.981 20:51:41 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:46.981 20:51:41 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:46.981 20:51:41 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:46.981 20:51:41 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:46.981 20:51:41 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:23:46.981 20:51:41 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:46.981 20:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:23:46.981 20:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:46.981 20:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:46.981 20:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:46.981 20:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:46.981 20:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:46.981 20:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:46.981 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:46.981 20:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:46.981 20:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:46.981 20:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:46.981 20:51:41 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:23:46.981 20:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:46.981 20:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:46.981 20:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:46.981 20:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:46.981 20:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:46.981 20:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:46.981 20:51:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:23:46.981 20:51:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:46.981 20:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:23:46.981 20:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:23:46.981 20:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:23:46.981 20:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:23:46.981 20:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:23:46.981 20:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@460 -- # nvmf_veth_init 00:23:46.981 20:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:46.981 20:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:23:46.981 20:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:23:46.981 20:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:23:46.981 20:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:46.981 20:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:23:46.981 20:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:46.981 20:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@152 -- # 
NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:23:46.981 20:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:46.981 20:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:23:46.982 20:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:46.982 20:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:46.982 20:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:46.982 20:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:46.982 20:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:46.982 20:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:46.982 20:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:23:46.982 Cannot find device "nvmf_init_br" 00:23:46.982 20:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # true 00:23:46.982 20:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:23:46.982 Cannot find device "nvmf_init_br2" 00:23:46.982 20:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # true 00:23:46.982 20:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:23:46.982 Cannot find device "nvmf_tgt_br" 00:23:46.982 20:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@164 -- # true 00:23:46.982 20:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:23:46.982 Cannot find device "nvmf_tgt_br2" 00:23:46.982 20:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@165 -- # true 00:23:46.982 20:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:23:46.982 Cannot find device "nvmf_init_br" 00:23:46.982 20:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # true 00:23:46.982 20:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:23:46.982 Cannot find device "nvmf_init_br2" 00:23:46.982 20:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@167 -- # true 00:23:46.982 20:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:23:46.982 Cannot find device "nvmf_tgt_br" 00:23:46.982 20:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@168 -- # true 00:23:46.982 20:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:23:46.982 Cannot find device "nvmf_tgt_br2" 00:23:46.982 20:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # true 00:23:46.982 20:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:23:47.240 Cannot find device "nvmf_br" 00:23:47.240 20:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # true 00:23:47.240 20:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:23:47.240 Cannot find device "nvmf_init_if" 00:23:47.240 20:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # true 00:23:47.240 20:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:23:47.240 Cannot find device "nvmf_init_if2" 00:23:47.240 20:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@172 -- # true 00:23:47.240 20:51:41 nvmf_abort_qd_sizes -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:47.240 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 
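The failed link deletions above are just best-effort cleanup of stale interfaces; the commands that follow rebuild the test topology from scratch. Condensed into a minimal sketch (interface names and addresses taken from the trace; per-device link-up commands, the true fallbacks and the iptables comment strings are left out), the setup amounts to:

ip netns add nvmf_tgt_ns_spdk
# one veth pair per endpoint; the *_br peers stay on the host and join the bridge
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
# target-side ends move into the namespace where nvmf_tgt will later run
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
# initiator addresses on the host, target addresses inside the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
# bridge the four host-side peers so initiator and target sides can reach each other
ip link add nvmf_br type bridge
ip link set nvmf_br up
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" master nvmf_br
done
# open NVMe/TCP port 4420 towards the initiator interfaces and allow bridged forwarding
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
# sanity checks: host reaches the target addresses, namespace reaches the initiator addresses
ping -c 1 10.0.0.3 && ping -c 1 10.0.0.4
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2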
00:23:47.240 20:51:42 nvmf_abort_qd_sizes -- nvmf/common.sh@173 -- # true 00:23:47.240 20:51:42 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:47.240 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:47.240 20:51:42 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # true 00:23:47.240 20:51:42 nvmf_abort_qd_sizes -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:23:47.240 20:51:42 nvmf_abort_qd_sizes -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:47.240 20:51:42 nvmf_abort_qd_sizes -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:23:47.240 20:51:42 nvmf_abort_qd_sizes -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:47.240 20:51:42 nvmf_abort_qd_sizes -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:47.240 20:51:42 nvmf_abort_qd_sizes -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:47.240 20:51:42 nvmf_abort_qd_sizes -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:47.240 20:51:42 nvmf_abort_qd_sizes -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:47.240 20:51:42 nvmf_abort_qd_sizes -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:23:47.240 20:51:42 nvmf_abort_qd_sizes -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:23:47.240 20:51:42 nvmf_abort_qd_sizes -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:23:47.240 20:51:42 nvmf_abort_qd_sizes -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:23:47.240 20:51:42 nvmf_abort_qd_sizes -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:23:47.240 20:51:42 nvmf_abort_qd_sizes -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:23:47.240 20:51:42 nvmf_abort_qd_sizes -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:23:47.240 20:51:42 nvmf_abort_qd_sizes -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:23:47.240 20:51:42 nvmf_abort_qd_sizes -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:23:47.240 20:51:42 nvmf_abort_qd_sizes -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:47.240 20:51:42 nvmf_abort_qd_sizes -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:47.240 20:51:42 nvmf_abort_qd_sizes -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:47.240 20:51:42 nvmf_abort_qd_sizes -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:23:47.240 20:51:42 nvmf_abort_qd_sizes -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:23:47.240 20:51:42 nvmf_abort_qd_sizes -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:23:47.240 20:51:42 nvmf_abort_qd_sizes -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:23:47.240 20:51:42 nvmf_abort_qd_sizes -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:47.240 20:51:42 nvmf_abort_qd_sizes -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:47.499 20:51:42 nvmf_abort_qd_sizes -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:47.499 20:51:42 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j 
ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:23:47.499 20:51:42 nvmf_abort_qd_sizes -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:23:47.499 20:51:42 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:23:47.499 20:51:42 nvmf_abort_qd_sizes -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:47.499 20:51:42 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:23:47.499 20:51:42 nvmf_abort_qd_sizes -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:23:47.499 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:47.499 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.082 ms 00:23:47.499 00:23:47.499 --- 10.0.0.3 ping statistics --- 00:23:47.499 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:47.499 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:23:47.499 20:51:42 nvmf_abort_qd_sizes -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:23:47.499 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:23:47.499 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.046 ms 00:23:47.499 00:23:47.499 --- 10.0.0.4 ping statistics --- 00:23:47.499 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:47.499 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:23:47.499 20:51:42 nvmf_abort_qd_sizes -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:47.499 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:47.499 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:23:47.499 00:23:47.499 --- 10.0.0.1 ping statistics --- 00:23:47.499 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:47.499 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:23:47.499 20:51:42 nvmf_abort_qd_sizes -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:23:47.499 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:47.499 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.046 ms 00:23:47.499 00:23:47.499 --- 10.0.0.2 ping statistics --- 00:23:47.499 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:47.499 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:23:47.499 20:51:42 nvmf_abort_qd_sizes -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:47.499 20:51:42 nvmf_abort_qd_sizes -- nvmf/common.sh@461 -- # return 0 00:23:47.499 20:51:42 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:23:47.499 20:51:42 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:23:48.066 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:48.325 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:23:48.325 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:23:48.325 20:51:43 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:48.325 20:51:43 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:48.325 20:51:43 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:48.325 20:51:43 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:48.325 20:51:43 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:48.325 20:51:43 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:48.325 20:51:43 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:23:48.325 20:51:43 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:48.325 20:51:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:48.325 20:51:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:23:48.325 20:51:43 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=84961 00:23:48.325 20:51:43 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 84961 00:23:48.325 20:51:43 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:23:48.325 20:51:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 84961 ']' 00:23:48.325 20:51:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:48.325 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:48.325 20:51:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:48.325 20:51:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:48.325 20:51:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:48.325 20:51:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:23:48.584 [2024-11-26 20:51:43.347432] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:23:48.584 [2024-11-26 20:51:43.347534] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:48.584 [2024-11-26 20:51:43.510822] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:48.844 [2024-11-26 20:51:43.599141] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:48.844 [2024-11-26 20:51:43.599555] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:48.844 [2024-11-26 20:51:43.599661] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:48.844 [2024-11-26 20:51:43.599751] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:48.844 [2024-11-26 20:51:43.599829] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:48.844 [2024-11-26 20:51:43.601145] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:48.844 [2024-11-26 20:51:43.601277] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:48.844 [2024-11-26 20:51:43.601319] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:48.844 [2024-11-26 20:51:43.601322] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:48.844 [2024-11-26 20:51:43.655043] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:23:49.780 20:51:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:49.780 20:51:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:23:49.780 20:51:44 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:49.780 20:51:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:49.780 20:51:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:23:49.780 20:51:44 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:49.780 20:51:44 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:23:49.780 20:51:44 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:23:49.780 20:51:44 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:23:49.780 20:51:44 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:23:49.780 20:51:44 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:23:49.781 20:51:44 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n '' ]] 00:23:49.781 20:51:44 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:23:49.781 20:51:44 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:23:49.781 20:51:44 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # local bdf= 00:23:49.781 20:51:44 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:23:49.781 20:51:44 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # local class 00:23:49.781 20:51:44 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # local subclass 00:23:49.781 20:51:44 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # local progif 00:23:49.781 20:51:44 
nvmf_abort_qd_sizes -- scripts/common.sh@236 -- # printf %02x 1 00:23:49.781 20:51:44 nvmf_abort_qd_sizes -- scripts/common.sh@236 -- # class=01 00:23:49.781 20:51:44 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # printf %02x 8 00:23:49.781 20:51:44 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # subclass=08 00:23:49.781 20:51:44 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # printf %02x 2 00:23:49.781 20:51:44 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # progif=02 00:23:49.781 20:51:44 nvmf_abort_qd_sizes -- scripts/common.sh@240 -- # hash lspci 00:23:49.781 20:51:44 nvmf_abort_qd_sizes -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:23:49.781 20:51:44 nvmf_abort_qd_sizes -- scripts/common.sh@242 -- # lspci -mm -n -D 00:23:49.781 20:51:44 nvmf_abort_qd_sizes -- scripts/common.sh@243 -- # grep -i -- -p02 00:23:49.781 20:51:44 nvmf_abort_qd_sizes -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:23:49.781 20:51:44 nvmf_abort_qd_sizes -- scripts/common.sh@245 -- # tr -d '"' 00:23:49.781 20:51:44 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:23:49.781 20:51:44 nvmf_abort_qd_sizes -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:23:49.781 20:51:44 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # local i 00:23:49.781 20:51:44 nvmf_abort_qd_sizes -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:23:49.781 20:51:44 nvmf_abort_qd_sizes -- scripts/common.sh@25 -- # [[ -z '' ]] 00:23:49.781 20:51:44 nvmf_abort_qd_sizes -- scripts/common.sh@27 -- # return 0 00:23:49.781 20:51:44 nvmf_abort_qd_sizes -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:23:49.781 20:51:44 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:23:49.781 20:51:44 nvmf_abort_qd_sizes -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:23:49.781 20:51:44 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # local i 00:23:49.781 20:51:44 nvmf_abort_qd_sizes -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:23:49.781 20:51:44 nvmf_abort_qd_sizes -- scripts/common.sh@25 -- # [[ -z '' ]] 00:23:49.781 20:51:44 nvmf_abort_qd_sizes -- scripts/common.sh@27 -- # return 0 00:23:49.781 20:51:44 nvmf_abort_qd_sizes -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:23:49.781 20:51:44 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:23:49.781 20:51:44 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:23:49.781 20:51:44 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:23:49.781 20:51:44 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:23:49.781 20:51:44 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:23:49.781 20:51:44 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:23:49.781 20:51:44 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:23:49.781 20:51:44 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:23:49.781 20:51:44 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:23:49.781 20:51:44 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:23:49.781 20:51:44 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 2 )) 00:23:49.781 20:51:44 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:23:49.781 20:51:44 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 2 > 0 )) 
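The controller discovery just traced reduces to a single lspci pipeline plus a per-device driver check. As a standalone sketch (pipeline copied from the xtrace; the allow/block-list filtering, empty in this run, is omitted):

# NVMe controllers are PCI class 01 (mass storage), subclass 08 (NVM), prog-if 02 (NVM Express)
lspci -mm -n -D | grep -i -- -p02 \
    | awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"'
# Each resulting BDF is then kept only if it is not claimed by the kernel nvme driver
# (no /sys/bus/pci/drivers/nvme/<bdf> entry), i.e. it is available to userspace;
# here that leaves 0000:00:10.0 and 0000:00:11.0, and the test attaches to the first one.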
00:23:49.781 20:51:44 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:00:10.0 00:23:49.781 20:51:44 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:23:49.781 20:51:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:49.781 20:51:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:49.781 20:51:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:23:49.781 ************************************ 00:23:49.781 START TEST spdk_target_abort 00:23:49.781 ************************************ 00:23:49.781 20:51:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:23:49.781 20:51:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:23:49.781 20:51:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target 00:23:49.781 20:51:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:49.781 20:51:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:23:49.781 spdk_targetn1 00:23:49.781 20:51:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:49.781 20:51:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:49.781 20:51:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:49.781 20:51:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:23:49.781 [2024-11-26 20:51:44.572460] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:49.781 20:51:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:49.781 20:51:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:23:49.781 20:51:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:49.781 20:51:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:23:49.781 20:51:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:49.781 20:51:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:23:49.781 20:51:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:49.781 20:51:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:23:49.781 20:51:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:49.781 20:51:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.3 -s 4420 00:23:49.781 20:51:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:49.781 20:51:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:23:49.781 [2024-11-26 20:51:44.616617] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:49.781 20:51:44 
nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:49.781 20:51:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.3 4420 nqn.2016-06.io.spdk:testnqn 00:23:49.781 20:51:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:23:49.781 20:51:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:23:49.781 20:51:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.3 00:23:49.781 20:51:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:23:49.781 20:51:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:23:49.781 20:51:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:23:49.781 20:51:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:23:49.781 20:51:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:23:49.781 20:51:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:23:49.781 20:51:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:23:49.781 20:51:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:23:49.781 20:51:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:23:49.781 20:51:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:23:49.781 20:51:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3' 00:23:49.781 20:51:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:23:49.781 20:51:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:23:49.781 20:51:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:23:49.781 20:51:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:23:49.781 20:51:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:23:49.781 20:51:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:23:53.085 Initializing NVMe Controllers 00:23:53.085 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:23:53.085 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:23:53.085 Initialization complete. Launching workers. 
00:23:53.085 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 11867, failed: 0 00:23:53.085 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1058, failed to submit 10809 00:23:53.085 success 828, unsuccessful 230, failed 0 00:23:53.085 20:51:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:23:53.085 20:51:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:23:56.363 Initializing NVMe Controllers 00:23:56.363 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:23:56.363 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:23:56.363 Initialization complete. Launching workers. 00:23:56.363 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8952, failed: 0 00:23:56.363 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1162, failed to submit 7790 00:23:56.363 success 365, unsuccessful 797, failed 0 00:23:56.363 20:51:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:23:56.363 20:51:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:23:59.648 Initializing NVMe Controllers 00:23:59.648 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:23:59.648 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:23:59.648 Initialization complete. Launching workers. 
00:23:59.648 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 31220, failed: 0 00:23:59.648 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2251, failed to submit 28969 00:23:59.648 success 467, unsuccessful 1784, failed 0 00:23:59.648 20:51:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:23:59.648 20:51:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:59.648 20:51:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:23:59.648 20:51:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:59.648 20:51:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:23:59.648 20:51:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:59.648 20:51:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:24:00.213 20:51:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:00.213 20:51:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 84961 00:24:00.213 20:51:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 84961 ']' 00:24:00.213 20:51:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 84961 00:24:00.213 20:51:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:24:00.213 20:51:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:00.213 20:51:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84961 00:24:00.213 20:51:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:00.213 20:51:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:00.213 20:51:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84961' 00:24:00.213 killing process with pid 84961 00:24:00.213 20:51:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 84961 00:24:00.213 20:51:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 84961 00:24:00.471 00:24:00.471 real 0m10.880s 00:24:00.471 user 0m43.889s 00:24:00.471 sys 0m2.864s 00:24:00.471 20:51:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:00.471 20:51:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:24:00.471 ************************************ 00:24:00.471 END TEST spdk_target_abort 00:24:00.471 ************************************ 00:24:00.471 20:51:55 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:24:00.471 20:51:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:24:00.471 20:51:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:00.471 20:51:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:24:00.471 ************************************ 00:24:00.471 START TEST kernel_target_abort 00:24:00.471 
************************************ 00:24:00.471 20:51:55 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:24:00.471 20:51:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:24:00.471 20:51:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:24:00.471 20:51:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:00.471 20:51:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:00.471 20:51:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:00.471 20:51:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:00.471 20:51:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:00.471 20:51:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:00.471 20:51:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:00.471 20:51:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:00.471 20:51:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:00.471 20:51:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:24:00.471 20:51:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:24:00.471 20:51:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:24:00.471 20:51:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:00.471 20:51:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:00.471 20:51:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:24:00.471 20:51:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # local block nvme 00:24:00.471 20:51:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:24:00.471 20:51:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:24:00.729 20:51:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:24:00.729 20:51:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:24:00.985 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:00.985 Waiting for block devices as requested 00:24:00.985 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:24:01.243 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:24:01.243 20:51:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:24:01.243 20:51:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:24:01.243 20:51:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:24:01.243 20:51:56 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:24:01.243 20:51:56 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:24:01.243 20:51:56 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:24:01.243 20:51:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:24:01.243 20:51:56 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:24:01.244 20:51:56 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:24:01.502 No valid GPT data, bailing 00:24:01.502 20:51:56 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:24:01.502 20:51:56 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:24:01.502 20:51:56 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:24:01.502 20:51:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:24:01.502 20:51:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:24:01.502 20:51:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:24:01.502 20:51:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:24:01.502 20:51:56 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:24:01.502 20:51:56 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:24:01.502 20:51:56 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:24:01.502 20:51:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:24:01.502 20:51:56 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:24:01.502 20:51:56 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:24:01.502 No valid GPT data, bailing 00:24:01.502 20:51:56 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 
00:24:01.502 20:51:56 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:24:01.502 20:51:56 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:24:01.503 20:51:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:24:01.503 20:51:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:24:01.503 20:51:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:24:01.503 20:51:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:24:01.503 20:51:56 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:24:01.503 20:51:56 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:24:01.503 20:51:56 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:24:01.503 20:51:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:24:01.503 20:51:56 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:24:01.503 20:51:56 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:24:01.503 No valid GPT data, bailing 00:24:01.503 20:51:56 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:24:01.503 20:51:56 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:24:01.503 20:51:56 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:24:01.503 20:51:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:24:01.503 20:51:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:24:01.503 20:51:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:24:01.503 20:51:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:24:01.503 20:51:56 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:24:01.503 20:51:56 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:24:01.503 20:51:56 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:24:01.503 20:51:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:24:01.503 20:51:56 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:24:01.503 20:51:56 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:24:01.503 No valid GPT data, bailing 00:24:01.503 20:51:56 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:24:01.761 20:51:56 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:24:01.761 20:51:56 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:24:01.761 20:51:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:24:01.761 20:51:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ 
-b /dev/nvme1n1 ]] 00:24:01.761 20:51:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:01.761 20:51:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:01.761 20:51:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:24:01.761 20:51:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:24:01.761 20:51:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:24:01.762 20:51:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:24:01.762 20:51:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:24:01.762 20:51:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:24:01.762 20:51:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:24:01.762 20:51:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:24:01.762 20:51:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:24:01.762 20:51:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:24:01.762 20:51:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b --hostid=5b7a0101-ee75-44bd-b64f-b6a56d193f2b -a 10.0.0.1 -t tcp -s 4420 00:24:01.762 00:24:01.762 Discovery Log Number of Records 2, Generation counter 2 00:24:01.762 =====Discovery Log Entry 0====== 00:24:01.762 trtype: tcp 00:24:01.762 adrfam: ipv4 00:24:01.762 subtype: current discovery subsystem 00:24:01.762 treq: not specified, sq flow control disable supported 00:24:01.762 portid: 1 00:24:01.762 trsvcid: 4420 00:24:01.762 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:24:01.762 traddr: 10.0.0.1 00:24:01.762 eflags: none 00:24:01.762 sectype: none 00:24:01.762 =====Discovery Log Entry 1====== 00:24:01.762 trtype: tcp 00:24:01.762 adrfam: ipv4 00:24:01.762 subtype: nvme subsystem 00:24:01.762 treq: not specified, sq flow control disable supported 00:24:01.762 portid: 1 00:24:01.762 trsvcid: 4420 00:24:01.762 subnqn: nqn.2016-06.io.spdk:testnqn 00:24:01.762 traddr: 10.0.0.1 00:24:01.762 eflags: none 00:24:01.762 sectype: none 00:24:01.762 20:51:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:24:01.762 20:51:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:24:01.762 20:51:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:24:01.762 20:51:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:24:01.762 20:51:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:24:01.762 20:51:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:24:01.762 20:51:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:24:01.762 20:51:56 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:24:01.762 20:51:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:24:01.762 20:51:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:01.762 20:51:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:24:01.762 20:51:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:01.762 20:51:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:24:01.762 20:51:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:01.762 20:51:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:24:01.762 20:51:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:01.762 20:51:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:24:01.762 20:51:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:01.762 20:51:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:01.762 20:51:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:24:01.762 20:51:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:05.049 Initializing NVMe Controllers 00:24:05.049 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:24:05.049 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:24:05.049 Initialization complete. Launching workers. 00:24:05.049 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 40249, failed: 0 00:24:05.049 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 40249, failed to submit 0 00:24:05.049 success 0, unsuccessful 40249, failed 0 00:24:05.050 20:51:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:24:05.050 20:51:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:08.338 Initializing NVMe Controllers 00:24:08.338 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:24:08.338 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:24:08.338 Initialization complete. Launching workers. 
00:24:08.338 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 74788, failed: 0 00:24:08.338 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 32686, failed to submit 42102 00:24:08.338 success 0, unsuccessful 32686, failed 0 00:24:08.338 20:52:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:24:08.338 20:52:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:11.620 Initializing NVMe Controllers 00:24:11.620 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:24:11.620 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:24:11.620 Initialization complete. Launching workers. 00:24:11.620 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 90259, failed: 0 00:24:11.620 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 22580, failed to submit 67679 00:24:11.620 success 0, unsuccessful 22580, failed 0 00:24:11.620 20:52:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:24:11.620 20:52:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:24:11.620 20:52:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:24:11.620 20:52:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:11.620 20:52:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:11.620 20:52:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:24:11.620 20:52:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:11.620 20:52:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:24:11.620 20:52:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:24:11.620 20:52:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:24:12.184 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:14.711 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:24:14.711 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:24:14.711 00:24:14.711 real 0m13.960s 00:24:14.711 user 0m6.544s 00:24:14.711 sys 0m4.968s 00:24:14.711 20:52:09 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:14.711 20:52:09 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:24:14.711 ************************************ 00:24:14.711 END TEST kernel_target_abort 00:24:14.711 ************************************ 00:24:14.711 20:52:09 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:24:14.711 20:52:09 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:24:14.711 
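For reference, the kernel target that this test just tore down was configured earlier entirely through the nvmet configfs tree. The xtrace does not show where each echo was redirected, so the following is only a plausible reconstruction using the standard nvmet attribute names (which attribute receives the SPDK-... string is an assumption):

modprobe nvmet
cd /sys/kernel/config/nvmet
mkdir subsystems/nqn.2016-06.io.spdk:testnqn
mkdir subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
mkdir ports/1
# subsystem identity and access control (exact attribute for the SPDK-... string is not visible in the trace)
echo SPDK-nqn.2016-06.io.spdk:testnqn > subsystems/nqn.2016-06.io.spdk:testnqn/attr_model
echo 1 > subsystems/nqn.2016-06.io.spdk:testnqn/attr_allow_any_host
# back the single namespace with the free local disk picked by the block-device scan (/dev/nvme1n1)
echo /dev/nvme1n1 > subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1/device_path
echo 1 > subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1/enable
# NVMe/TCP listener on the initiator-facing address
echo 10.0.0.1 > ports/1/addr_traddr
echo tcp > ports/1/addr_trtype
echo 4420 > ports/1/addr_trsvcid
echo ipv4 > ports/1/addr_adrfam
ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ports/1/subsystems/
# the clean_kernel_target steps above mirror this in reverse: echo 0 to disable the namespace,
# remove the port-to-subsystem link, rmdir the namespace, port and subsystem directories,
# then modprobe -r nvmet_tcp nvmet and rebind the disks with setup.sh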
20:52:09 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:14.711 20:52:09 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:24:14.711 20:52:09 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:14.711 20:52:09 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:24:14.711 20:52:09 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:14.711 20:52:09 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:14.711 rmmod nvme_tcp 00:24:14.711 rmmod nvme_fabrics 00:24:14.711 rmmod nvme_keyring 00:24:14.711 20:52:09 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:14.711 20:52:09 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:24:14.711 20:52:09 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:24:14.711 20:52:09 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 84961 ']' 00:24:14.711 20:52:09 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 84961 00:24:14.711 20:52:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 84961 ']' 00:24:14.711 20:52:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 84961 00:24:14.711 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (84961) - No such process 00:24:14.711 Process with pid 84961 is not found 00:24:14.711 20:52:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 84961 is not found' 00:24:14.711 20:52:09 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:24:14.711 20:52:09 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:24:15.276 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:15.276 Waiting for block devices as requested 00:24:15.276 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:24:15.276 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:24:15.533 20:52:10 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:15.533 20:52:10 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:15.533 20:52:10 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:24:15.533 20:52:10 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:15.533 20:52:10 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:24:15.533 20:52:10 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:24:15.533 20:52:10 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:15.533 20:52:10 nvmf_abort_qd_sizes -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:24:15.533 20:52:10 nvmf_abort_qd_sizes -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:24:15.533 20:52:10 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:24:15.533 20:52:10 nvmf_abort_qd_sizes -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:24:15.533 20:52:10 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:24:15.533 20:52:10 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:24:15.533 20:52:10 nvmf_abort_qd_sizes -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:24:15.533 20:52:10 nvmf_abort_qd_sizes -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:24:15.533 20:52:10 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:24:15.533 20:52:10 nvmf_abort_qd_sizes 
-- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:24:15.533 20:52:10 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:24:15.533 20:52:10 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:24:15.533 20:52:10 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:15.533 20:52:10 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:15.791 20:52:10 nvmf_abort_qd_sizes -- nvmf/common.sh@246 -- # remove_spdk_ns 00:24:15.791 20:52:10 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:15.791 20:52:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:24:15.791 20:52:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:15.791 20:52:10 nvmf_abort_qd_sizes -- nvmf/common.sh@300 -- # return 0 00:24:15.791 ************************************ 00:24:15.791 END TEST nvmf_abort_qd_sizes 00:24:15.791 ************************************ 00:24:15.791 00:24:15.791 real 0m28.979s 00:24:15.791 user 0m51.879s 00:24:15.791 sys 0m9.734s 00:24:15.791 20:52:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:15.791 20:52:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:24:15.791 20:52:10 -- spdk/autotest.sh@292 -- # run_test keyring_file /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:24:15.791 20:52:10 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:24:15.791 20:52:10 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:15.791 20:52:10 -- common/autotest_common.sh@10 -- # set +x 00:24:15.791 ************************************ 00:24:15.791 START TEST keyring_file 00:24:15.791 ************************************ 00:24:15.791 20:52:10 keyring_file -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:24:15.791 * Looking for test storage... 
00:24:15.791 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:24:15.791 20:52:10 keyring_file -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:15.791 20:52:10 keyring_file -- common/autotest_common.sh@1693 -- # lcov --version 00:24:15.791 20:52:10 keyring_file -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:16.050 20:52:10 keyring_file -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:16.050 20:52:10 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:16.050 20:52:10 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:16.050 20:52:10 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:16.050 20:52:10 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:24:16.050 20:52:10 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:24:16.050 20:52:10 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:24:16.050 20:52:10 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:24:16.050 20:52:10 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:24:16.050 20:52:10 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:24:16.050 20:52:10 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:24:16.050 20:52:10 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:16.050 20:52:10 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:24:16.050 20:52:10 keyring_file -- scripts/common.sh@345 -- # : 1 00:24:16.050 20:52:10 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:16.050 20:52:10 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:16.050 20:52:10 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:24:16.050 20:52:10 keyring_file -- scripts/common.sh@353 -- # local d=1 00:24:16.050 20:52:10 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:16.050 20:52:10 keyring_file -- scripts/common.sh@355 -- # echo 1 00:24:16.050 20:52:10 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:24:16.050 20:52:10 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:24:16.050 20:52:10 keyring_file -- scripts/common.sh@353 -- # local d=2 00:24:16.050 20:52:10 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:16.050 20:52:10 keyring_file -- scripts/common.sh@355 -- # echo 2 00:24:16.050 20:52:10 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:24:16.050 20:52:10 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:16.050 20:52:10 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:16.050 20:52:10 keyring_file -- scripts/common.sh@368 -- # return 0 00:24:16.050 20:52:10 keyring_file -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:16.050 20:52:10 keyring_file -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:16.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:16.050 --rc genhtml_branch_coverage=1 00:24:16.050 --rc genhtml_function_coverage=1 00:24:16.050 --rc genhtml_legend=1 00:24:16.050 --rc geninfo_all_blocks=1 00:24:16.050 --rc geninfo_unexecuted_blocks=1 00:24:16.050 00:24:16.050 ' 00:24:16.050 20:52:10 keyring_file -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:16.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:16.050 --rc genhtml_branch_coverage=1 00:24:16.050 --rc genhtml_function_coverage=1 00:24:16.050 --rc genhtml_legend=1 00:24:16.050 --rc geninfo_all_blocks=1 00:24:16.050 --rc 
geninfo_unexecuted_blocks=1 00:24:16.050 00:24:16.050 ' 00:24:16.050 20:52:10 keyring_file -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:16.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:16.050 --rc genhtml_branch_coverage=1 00:24:16.050 --rc genhtml_function_coverage=1 00:24:16.050 --rc genhtml_legend=1 00:24:16.050 --rc geninfo_all_blocks=1 00:24:16.050 --rc geninfo_unexecuted_blocks=1 00:24:16.050 00:24:16.050 ' 00:24:16.050 20:52:10 keyring_file -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:16.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:16.050 --rc genhtml_branch_coverage=1 00:24:16.050 --rc genhtml_function_coverage=1 00:24:16.050 --rc genhtml_legend=1 00:24:16.050 --rc geninfo_all_blocks=1 00:24:16.050 --rc geninfo_unexecuted_blocks=1 00:24:16.050 00:24:16.050 ' 00:24:16.050 20:52:10 keyring_file -- keyring/file.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:24:16.050 20:52:10 keyring_file -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:16.050 20:52:10 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:24:16.050 20:52:10 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:16.050 20:52:10 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:16.050 20:52:10 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:16.050 20:52:10 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:16.050 20:52:10 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:16.050 20:52:10 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:16.050 20:52:10 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:16.050 20:52:10 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:16.050 20:52:10 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:16.050 20:52:10 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:16.050 20:52:10 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b 00:24:16.050 20:52:10 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=5b7a0101-ee75-44bd-b64f-b6a56d193f2b 00:24:16.050 20:52:10 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:16.050 20:52:10 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:16.050 20:52:10 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:16.050 20:52:10 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:16.050 20:52:10 keyring_file -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:16.050 20:52:10 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:24:16.050 20:52:10 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:16.050 20:52:10 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:16.050 20:52:10 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:16.050 20:52:10 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:16.050 20:52:10 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:16.050 20:52:10 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:16.050 20:52:10 keyring_file -- paths/export.sh@5 -- # export PATH 00:24:16.050 20:52:10 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:16.051 20:52:10 keyring_file -- nvmf/common.sh@51 -- # : 0 00:24:16.051 20:52:10 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:16.051 20:52:10 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:16.051 20:52:10 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:16.051 20:52:10 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:16.051 20:52:10 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:16.051 20:52:10 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:16.051 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:16.051 20:52:10 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:16.051 20:52:10 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:16.051 20:52:10 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:16.051 20:52:10 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:24:16.051 20:52:10 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:24:16.051 20:52:10 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:24:16.051 20:52:10 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:24:16.051 20:52:10 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:24:16.051 20:52:10 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:24:16.051 20:52:10 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:24:16.051 20:52:10 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:24:16.051 20:52:10 
keyring_file -- keyring/common.sh@17 -- # name=key0 00:24:16.051 20:52:10 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:24:16.051 20:52:10 keyring_file -- keyring/common.sh@17 -- # digest=0 00:24:16.051 20:52:10 keyring_file -- keyring/common.sh@18 -- # mktemp 00:24:16.051 20:52:10 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.A48fnwz3zo 00:24:16.051 20:52:10 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:24:16.051 20:52:10 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:24:16.051 20:52:10 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:24:16.051 20:52:10 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:24:16.051 20:52:10 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:24:16.051 20:52:10 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:24:16.051 20:52:10 keyring_file -- nvmf/common.sh@733 -- # python - 00:24:16.051 20:52:10 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.A48fnwz3zo 00:24:16.051 20:52:10 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.A48fnwz3zo 00:24:16.051 20:52:10 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.A48fnwz3zo 00:24:16.051 20:52:10 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:24:16.051 20:52:10 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:24:16.051 20:52:10 keyring_file -- keyring/common.sh@17 -- # name=key1 00:24:16.051 20:52:10 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:24:16.051 20:52:10 keyring_file -- keyring/common.sh@17 -- # digest=0 00:24:16.051 20:52:10 keyring_file -- keyring/common.sh@18 -- # mktemp 00:24:16.051 20:52:10 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.DgZTvuXD56 00:24:16.051 20:52:10 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:24:16.051 20:52:10 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:24:16.051 20:52:10 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:24:16.051 20:52:10 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:24:16.051 20:52:10 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:24:16.051 20:52:10 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:24:16.051 20:52:10 keyring_file -- nvmf/common.sh@733 -- # python - 00:24:16.051 20:52:11 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.DgZTvuXD56 00:24:16.051 20:52:11 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.DgZTvuXD56 00:24:16.051 20:52:11 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.DgZTvuXD56 00:24:16.051 20:52:11 keyring_file -- keyring/file.sh@30 -- # tgtpid=85885 00:24:16.051 20:52:11 keyring_file -- keyring/file.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:16.051 20:52:11 keyring_file -- keyring/file.sh@32 -- # waitforlisten 85885 00:24:16.051 20:52:11 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 85885 ']' 00:24:16.051 20:52:11 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:16.051 20:52:11 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:16.051 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:24:16.051 20:52:11 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:16.051 20:52:11 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:16.051 20:52:11 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:24:16.309 [2024-11-26 20:52:11.094787] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:24:16.310 [2024-11-26 20:52:11.094899] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85885 ] 00:24:16.310 [2024-11-26 20:52:11.252555] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:16.568 [2024-11-26 20:52:11.339108] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:16.568 [2024-11-26 20:52:11.453106] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:24:17.163 20:52:12 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:17.163 20:52:12 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:24:17.163 20:52:12 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:24:17.163 20:52:12 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.163 20:52:12 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:24:17.163 [2024-11-26 20:52:12.153555] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:17.422 null0 00:24:17.422 [2024-11-26 20:52:12.185511] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:17.422 [2024-11-26 20:52:12.185731] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:24:17.422 20:52:12 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.422 20:52:12 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:24:17.422 20:52:12 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:24:17.422 20:52:12 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:24:17.422 20:52:12 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:17.422 20:52:12 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:17.422 20:52:12 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:17.422 20:52:12 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:17.422 20:52:12 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:24:17.422 20:52:12 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.422 20:52:12 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:24:17.422 [2024-11-26 20:52:12.217508] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:24:17.422 request: 00:24:17.422 { 00:24:17.422 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:24:17.422 "secure_channel": false, 00:24:17.422 "listen_address": { 00:24:17.422 "trtype": "tcp", 00:24:17.422 "traddr": "127.0.0.1", 00:24:17.422 "trsvcid": "4420" 00:24:17.422 }, 00:24:17.422 "method": "nvmf_subsystem_add_listener", 
00:24:17.422 "req_id": 1 00:24:17.422 } 00:24:17.422 Got JSON-RPC error response 00:24:17.422 response: 00:24:17.422 { 00:24:17.422 "code": -32602, 00:24:17.422 "message": "Invalid parameters" 00:24:17.422 } 00:24:17.422 20:52:12 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:17.422 20:52:12 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:24:17.422 20:52:12 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:17.422 20:52:12 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:17.422 20:52:12 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:17.422 20:52:12 keyring_file -- keyring/file.sh@47 -- # bperfpid=85902 00:24:17.422 20:52:12 keyring_file -- keyring/file.sh@49 -- # waitforlisten 85902 /var/tmp/bperf.sock 00:24:17.422 20:52:12 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 85902 ']' 00:24:17.422 20:52:12 keyring_file -- keyring/file.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:24:17.422 20:52:12 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:17.422 20:52:12 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:17.422 20:52:12 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:17.422 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:17.422 20:52:12 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:17.422 20:52:12 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:24:17.422 [2024-11-26 20:52:12.285013] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:24:17.422 [2024-11-26 20:52:12.285413] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85902 ] 00:24:17.680 [2024-11-26 20:52:12.441326] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:17.680 [2024-11-26 20:52:12.519892] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:17.680 [2024-11-26 20:52:12.572217] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:24:18.615 20:52:13 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:18.615 20:52:13 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:24:18.615 20:52:13 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.A48fnwz3zo 00:24:18.615 20:52:13 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.A48fnwz3zo 00:24:18.615 20:52:13 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.DgZTvuXD56 00:24:18.615 20:52:13 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.DgZTvuXD56 00:24:18.873 20:52:13 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:24:18.873 20:52:13 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:24:18.873 20:52:13 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:18.873 20:52:13 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:18.873 20:52:13 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:19.131 20:52:14 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.A48fnwz3zo == \/\t\m\p\/\t\m\p\.\A\4\8\f\n\w\z\3\z\o ]] 00:24:19.131 20:52:14 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:24:19.131 20:52:14 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:24:19.131 20:52:14 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:19.131 20:52:14 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:24:19.131 20:52:14 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:19.389 20:52:14 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.DgZTvuXD56 == \/\t\m\p\/\t\m\p\.\D\g\Z\T\v\u\X\D\5\6 ]] 00:24:19.389 20:52:14 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:24:19.389 20:52:14 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:24:19.389 20:52:14 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:19.389 20:52:14 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:19.389 20:52:14 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:19.389 20:52:14 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:19.648 20:52:14 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:24:19.648 20:52:14 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:24:19.648 20:52:14 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:24:19.648 20:52:14 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:19.648 20:52:14 keyring_file -- 
keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:19.648 20:52:14 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:24:19.648 20:52:14 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:20.215 20:52:14 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:24:20.215 20:52:14 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:20.215 20:52:14 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:20.215 [2024-11-26 20:52:15.127046] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:20.215 nvme0n1 00:24:20.473 20:52:15 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:24:20.473 20:52:15 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:24:20.473 20:52:15 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:20.473 20:52:15 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:20.473 20:52:15 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:20.473 20:52:15 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:20.731 20:52:15 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:24:20.731 20:52:15 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:24:20.731 20:52:15 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:24:20.731 20:52:15 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:20.731 20:52:15 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:20.731 20:52:15 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:20.731 20:52:15 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:24:20.989 20:52:15 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:24:20.989 20:52:15 keyring_file -- keyring/file.sh@63 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:20.989 Running I/O for 1 seconds... 
00:24:22.184 14579.00 IOPS, 56.95 MiB/s 00:24:22.184 Latency(us) 00:24:22.184 [2024-11-26T20:52:17.177Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:22.184 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:24:22.184 nvme0n1 : 1.01 14622.85 57.12 0.00 0.00 8732.77 3682.50 16602.45 00:24:22.184 [2024-11-26T20:52:17.177Z] =================================================================================================================== 00:24:22.184 [2024-11-26T20:52:17.177Z] Total : 14622.85 57.12 0.00 0.00 8732.77 3682.50 16602.45 00:24:22.184 { 00:24:22.184 "results": [ 00:24:22.184 { 00:24:22.184 "job": "nvme0n1", 00:24:22.184 "core_mask": "0x2", 00:24:22.184 "workload": "randrw", 00:24:22.184 "percentage": 50, 00:24:22.184 "status": "finished", 00:24:22.184 "queue_depth": 128, 00:24:22.184 "io_size": 4096, 00:24:22.184 "runtime": 1.005823, 00:24:22.184 "iops": 14622.851137824448, 00:24:22.184 "mibps": 57.12051225712675, 00:24:22.184 "io_failed": 0, 00:24:22.184 "io_timeout": 0, 00:24:22.184 "avg_latency_us": 8732.770982555654, 00:24:22.184 "min_latency_us": 3682.499047619048, 00:24:22.184 "max_latency_us": 16602.453333333335 00:24:22.184 } 00:24:22.184 ], 00:24:22.184 "core_count": 1 00:24:22.184 } 00:24:22.184 20:52:16 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:24:22.184 20:52:16 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:24:22.442 20:52:17 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:24:22.442 20:52:17 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:24:22.442 20:52:17 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:22.442 20:52:17 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:22.442 20:52:17 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:22.442 20:52:17 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:22.442 20:52:17 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:24:22.442 20:52:17 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:24:22.442 20:52:17 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:22.442 20:52:17 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:24:22.442 20:52:17 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:22.442 20:52:17 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:22.442 20:52:17 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:24:22.701 20:52:17 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:24:22.701 20:52:17 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:24:22.701 20:52:17 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:24:22.701 20:52:17 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:24:22.701 20:52:17 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:24:22.701 20:52:17 keyring_file -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:22.701 20:52:17 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:24:22.701 20:52:17 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:22.701 20:52:17 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:24:22.701 20:52:17 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:24:22.959 [2024-11-26 20:52:17.879694] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:24:22.959 [2024-11-26 20:52:17.880454] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcb55d0 (107): Transport endpoint is not connected 00:24:22.959 [2024-11-26 20:52:17.881443] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcb55d0 (9): Bad file descriptor 00:24:22.959 [2024-11-26 20:52:17.882442] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:24:22.959 [2024-11-26 20:52:17.882466] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:24:22.959 [2024-11-26 20:52:17.882476] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:24:22.959 [2024-11-26 20:52:17.882489] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:24:22.959 request: 00:24:22.959 { 00:24:22.959 "name": "nvme0", 00:24:22.959 "trtype": "tcp", 00:24:22.959 "traddr": "127.0.0.1", 00:24:22.959 "adrfam": "ipv4", 00:24:22.959 "trsvcid": "4420", 00:24:22.959 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:22.959 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:22.959 "prchk_reftag": false, 00:24:22.959 "prchk_guard": false, 00:24:22.959 "hdgst": false, 00:24:22.959 "ddgst": false, 00:24:22.959 "psk": "key1", 00:24:22.959 "allow_unrecognized_csi": false, 00:24:22.959 "method": "bdev_nvme_attach_controller", 00:24:22.959 "req_id": 1 00:24:22.959 } 00:24:22.959 Got JSON-RPC error response 00:24:22.959 response: 00:24:22.959 { 00:24:22.959 "code": -5, 00:24:22.959 "message": "Input/output error" 00:24:22.959 } 00:24:22.959 20:52:17 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:24:22.959 20:52:17 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:22.959 20:52:17 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:22.959 20:52:17 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:22.959 20:52:17 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:24:22.959 20:52:17 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:24:22.959 20:52:17 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:22.959 20:52:17 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:22.959 20:52:17 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:22.959 20:52:17 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:23.217 20:52:18 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:24:23.217 20:52:18 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:24:23.217 20:52:18 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:24:23.217 20:52:18 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:23.217 20:52:18 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:23.217 20:52:18 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:23.217 20:52:18 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:24:23.475 20:52:18 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:24:23.475 20:52:18 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:24:23.475 20:52:18 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:24:23.734 20:52:18 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:24:23.734 20:52:18 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:24:23.992 20:52:18 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:24:23.992 20:52:18 keyring_file -- keyring/file.sh@78 -- # jq length 00:24:23.992 20:52:18 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:24.251 20:52:19 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:24:24.251 20:52:19 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.A48fnwz3zo 00:24:24.251 20:52:19 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.A48fnwz3zo 00:24:24.251 20:52:19 keyring_file -- 
common/autotest_common.sh@652 -- # local es=0 00:24:24.251 20:52:19 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.A48fnwz3zo 00:24:24.251 20:52:19 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:24:24.251 20:52:19 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:24.251 20:52:19 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:24:24.251 20:52:19 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:24.251 20:52:19 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.A48fnwz3zo 00:24:24.251 20:52:19 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.A48fnwz3zo 00:24:24.510 [2024-11-26 20:52:19.402685] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.A48fnwz3zo': 0100660 00:24:24.510 [2024-11-26 20:52:19.402753] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:24:24.510 request: 00:24:24.510 { 00:24:24.510 "name": "key0", 00:24:24.510 "path": "/tmp/tmp.A48fnwz3zo", 00:24:24.510 "method": "keyring_file_add_key", 00:24:24.510 "req_id": 1 00:24:24.510 } 00:24:24.510 Got JSON-RPC error response 00:24:24.510 response: 00:24:24.510 { 00:24:24.510 "code": -1, 00:24:24.510 "message": "Operation not permitted" 00:24:24.510 } 00:24:24.510 20:52:19 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:24:24.510 20:52:19 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:24.510 20:52:19 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:24.510 20:52:19 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:24.510 20:52:19 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.A48fnwz3zo 00:24:24.510 20:52:19 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.A48fnwz3zo 00:24:24.510 20:52:19 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.A48fnwz3zo 00:24:24.769 20:52:19 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.A48fnwz3zo 00:24:24.769 20:52:19 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:24:24.769 20:52:19 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:24.769 20:52:19 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:24:24.769 20:52:19 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:24.769 20:52:19 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:24.769 20:52:19 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:25.027 20:52:19 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:24:25.028 20:52:19 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:25.028 20:52:19 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:24:25.028 20:52:19 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:25.028 20:52:19 
keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:24:25.028 20:52:19 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:25.028 20:52:19 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:24:25.028 20:52:19 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:25.028 20:52:19 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:25.028 20:52:19 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:25.286 [2024-11-26 20:52:20.062838] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.A48fnwz3zo': No such file or directory 00:24:25.286 [2024-11-26 20:52:20.062901] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:24:25.286 [2024-11-26 20:52:20.062923] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:24:25.286 [2024-11-26 20:52:20.062932] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:24:25.286 [2024-11-26 20:52:20.062942] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:24:25.286 [2024-11-26 20:52:20.062951] bdev_nvme.c:6769:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:24:25.286 request: 00:24:25.286 { 00:24:25.286 "name": "nvme0", 00:24:25.286 "trtype": "tcp", 00:24:25.286 "traddr": "127.0.0.1", 00:24:25.286 "adrfam": "ipv4", 00:24:25.286 "trsvcid": "4420", 00:24:25.286 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:25.286 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:25.286 "prchk_reftag": false, 00:24:25.286 "prchk_guard": false, 00:24:25.286 "hdgst": false, 00:24:25.286 "ddgst": false, 00:24:25.286 "psk": "key0", 00:24:25.286 "allow_unrecognized_csi": false, 00:24:25.286 "method": "bdev_nvme_attach_controller", 00:24:25.286 "req_id": 1 00:24:25.286 } 00:24:25.286 Got JSON-RPC error response 00:24:25.286 response: 00:24:25.286 { 00:24:25.286 "code": -19, 00:24:25.286 "message": "No such device" 00:24:25.286 } 00:24:25.286 20:52:20 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:24:25.286 20:52:20 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:25.286 20:52:20 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:25.286 20:52:20 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:25.286 20:52:20 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:24:25.286 20:52:20 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:24:25.544 20:52:20 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:24:25.544 20:52:20 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:24:25.544 20:52:20 keyring_file -- keyring/common.sh@17 -- # name=key0 00:24:25.544 20:52:20 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:24:25.544 
20:52:20 keyring_file -- keyring/common.sh@17 -- # digest=0 00:24:25.544 20:52:20 keyring_file -- keyring/common.sh@18 -- # mktemp 00:24:25.544 20:52:20 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.I0meFUtOBf 00:24:25.544 20:52:20 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:24:25.544 20:52:20 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:24:25.544 20:52:20 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:24:25.544 20:52:20 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:24:25.544 20:52:20 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:24:25.544 20:52:20 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:24:25.544 20:52:20 keyring_file -- nvmf/common.sh@733 -- # python - 00:24:25.544 20:52:20 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.I0meFUtOBf 00:24:25.544 20:52:20 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.I0meFUtOBf 00:24:25.544 20:52:20 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.I0meFUtOBf 00:24:25.544 20:52:20 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.I0meFUtOBf 00:24:25.544 20:52:20 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.I0meFUtOBf 00:24:25.803 20:52:20 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:25.803 20:52:20 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:26.061 nvme0n1 00:24:26.061 20:52:20 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:24:26.061 20:52:20 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:24:26.061 20:52:20 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:26.061 20:52:20 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:26.061 20:52:20 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:26.061 20:52:20 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:26.319 20:52:21 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:24:26.319 20:52:21 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:24:26.319 20:52:21 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:24:26.577 20:52:21 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:24:26.577 20:52:21 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:24:26.577 20:52:21 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:26.577 20:52:21 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:26.577 20:52:21 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:26.836 20:52:21 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:24:26.836 20:52:21 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:24:26.836 20:52:21 keyring_file -- 
keyring/common.sh@12 -- # get_key key0 00:24:26.836 20:52:21 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:26.836 20:52:21 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:26.836 20:52:21 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:26.836 20:52:21 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:27.094 20:52:21 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:24:27.094 20:52:21 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:24:27.094 20:52:21 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:24:27.352 20:52:22 keyring_file -- keyring/file.sh@105 -- # jq length 00:24:27.352 20:52:22 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:24:27.352 20:52:22 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:27.610 20:52:22 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:24:27.610 20:52:22 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.I0meFUtOBf 00:24:27.610 20:52:22 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.I0meFUtOBf 00:24:27.868 20:52:22 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.DgZTvuXD56 00:24:27.868 20:52:22 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.DgZTvuXD56 00:24:28.125 20:52:23 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:28.125 20:52:23 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:28.691 nvme0n1 00:24:28.691 20:52:23 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:24:28.691 20:52:23 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:24:28.950 20:52:23 keyring_file -- keyring/file.sh@113 -- # config='{ 00:24:28.950 "subsystems": [ 00:24:28.950 { 00:24:28.950 "subsystem": "keyring", 00:24:28.950 "config": [ 00:24:28.950 { 00:24:28.950 "method": "keyring_file_add_key", 00:24:28.950 "params": { 00:24:28.950 "name": "key0", 00:24:28.950 "path": "/tmp/tmp.I0meFUtOBf" 00:24:28.950 } 00:24:28.950 }, 00:24:28.950 { 00:24:28.950 "method": "keyring_file_add_key", 00:24:28.950 "params": { 00:24:28.950 "name": "key1", 00:24:28.950 "path": "/tmp/tmp.DgZTvuXD56" 00:24:28.950 } 00:24:28.950 } 00:24:28.950 ] 00:24:28.950 }, 00:24:28.950 { 00:24:28.950 "subsystem": "iobuf", 00:24:28.950 "config": [ 00:24:28.950 { 00:24:28.950 "method": "iobuf_set_options", 00:24:28.950 "params": { 00:24:28.950 "small_pool_count": 8192, 00:24:28.950 "large_pool_count": 1024, 00:24:28.950 "small_bufsize": 8192, 00:24:28.950 "large_bufsize": 135168, 00:24:28.950 "enable_numa": false 00:24:28.950 } 00:24:28.950 } 00:24:28.950 ] 00:24:28.950 }, 00:24:28.950 { 00:24:28.950 "subsystem": 
"sock", 00:24:28.950 "config": [ 00:24:28.950 { 00:24:28.950 "method": "sock_set_default_impl", 00:24:28.950 "params": { 00:24:28.950 "impl_name": "uring" 00:24:28.950 } 00:24:28.950 }, 00:24:28.950 { 00:24:28.950 "method": "sock_impl_set_options", 00:24:28.950 "params": { 00:24:28.950 "impl_name": "ssl", 00:24:28.950 "recv_buf_size": 4096, 00:24:28.950 "send_buf_size": 4096, 00:24:28.950 "enable_recv_pipe": true, 00:24:28.950 "enable_quickack": false, 00:24:28.950 "enable_placement_id": 0, 00:24:28.950 "enable_zerocopy_send_server": true, 00:24:28.950 "enable_zerocopy_send_client": false, 00:24:28.950 "zerocopy_threshold": 0, 00:24:28.950 "tls_version": 0, 00:24:28.950 "enable_ktls": false 00:24:28.950 } 00:24:28.950 }, 00:24:28.950 { 00:24:28.950 "method": "sock_impl_set_options", 00:24:28.950 "params": { 00:24:28.950 "impl_name": "posix", 00:24:28.950 "recv_buf_size": 2097152, 00:24:28.950 "send_buf_size": 2097152, 00:24:28.950 "enable_recv_pipe": true, 00:24:28.950 "enable_quickack": false, 00:24:28.950 "enable_placement_id": 0, 00:24:28.950 "enable_zerocopy_send_server": true, 00:24:28.950 "enable_zerocopy_send_client": false, 00:24:28.950 "zerocopy_threshold": 0, 00:24:28.950 "tls_version": 0, 00:24:28.950 "enable_ktls": false 00:24:28.950 } 00:24:28.950 }, 00:24:28.950 { 00:24:28.950 "method": "sock_impl_set_options", 00:24:28.950 "params": { 00:24:28.950 "impl_name": "uring", 00:24:28.950 "recv_buf_size": 2097152, 00:24:28.950 "send_buf_size": 2097152, 00:24:28.950 "enable_recv_pipe": true, 00:24:28.950 "enable_quickack": false, 00:24:28.950 "enable_placement_id": 0, 00:24:28.950 "enable_zerocopy_send_server": false, 00:24:28.950 "enable_zerocopy_send_client": false, 00:24:28.950 "zerocopy_threshold": 0, 00:24:28.950 "tls_version": 0, 00:24:28.950 "enable_ktls": false 00:24:28.950 } 00:24:28.950 } 00:24:28.950 ] 00:24:28.950 }, 00:24:28.950 { 00:24:28.950 "subsystem": "vmd", 00:24:28.950 "config": [] 00:24:28.950 }, 00:24:28.950 { 00:24:28.950 "subsystem": "accel", 00:24:28.950 "config": [ 00:24:28.950 { 00:24:28.950 "method": "accel_set_options", 00:24:28.950 "params": { 00:24:28.950 "small_cache_size": 128, 00:24:28.950 "large_cache_size": 16, 00:24:28.950 "task_count": 2048, 00:24:28.950 "sequence_count": 2048, 00:24:28.950 "buf_count": 2048 00:24:28.950 } 00:24:28.950 } 00:24:28.950 ] 00:24:28.950 }, 00:24:28.950 { 00:24:28.951 "subsystem": "bdev", 00:24:28.951 "config": [ 00:24:28.951 { 00:24:28.951 "method": "bdev_set_options", 00:24:28.951 "params": { 00:24:28.951 "bdev_io_pool_size": 65535, 00:24:28.951 "bdev_io_cache_size": 256, 00:24:28.951 "bdev_auto_examine": true, 00:24:28.951 "iobuf_small_cache_size": 128, 00:24:28.951 "iobuf_large_cache_size": 16 00:24:28.951 } 00:24:28.951 }, 00:24:28.951 { 00:24:28.951 "method": "bdev_raid_set_options", 00:24:28.951 "params": { 00:24:28.951 "process_window_size_kb": 1024, 00:24:28.951 "process_max_bandwidth_mb_sec": 0 00:24:28.951 } 00:24:28.951 }, 00:24:28.951 { 00:24:28.951 "method": "bdev_iscsi_set_options", 00:24:28.951 "params": { 00:24:28.951 "timeout_sec": 30 00:24:28.951 } 00:24:28.951 }, 00:24:28.951 { 00:24:28.951 "method": "bdev_nvme_set_options", 00:24:28.951 "params": { 00:24:28.951 "action_on_timeout": "none", 00:24:28.951 "timeout_us": 0, 00:24:28.951 "timeout_admin_us": 0, 00:24:28.951 "keep_alive_timeout_ms": 10000, 00:24:28.951 "arbitration_burst": 0, 00:24:28.951 "low_priority_weight": 0, 00:24:28.951 "medium_priority_weight": 0, 00:24:28.951 "high_priority_weight": 0, 00:24:28.951 "nvme_adminq_poll_period_us": 
10000, 00:24:28.951 "nvme_ioq_poll_period_us": 0, 00:24:28.951 "io_queue_requests": 512, 00:24:28.951 "delay_cmd_submit": true, 00:24:28.951 "transport_retry_count": 4, 00:24:28.951 "bdev_retry_count": 3, 00:24:28.951 "transport_ack_timeout": 0, 00:24:28.951 "ctrlr_loss_timeout_sec": 0, 00:24:28.951 "reconnect_delay_sec": 0, 00:24:28.951 "fast_io_fail_timeout_sec": 0, 00:24:28.951 "disable_auto_failback": false, 00:24:28.951 "generate_uuids": false, 00:24:28.951 "transport_tos": 0, 00:24:28.951 "nvme_error_stat": false, 00:24:28.951 "rdma_srq_size": 0, 00:24:28.951 "io_path_stat": false, 00:24:28.951 "allow_accel_sequence": false, 00:24:28.951 "rdma_max_cq_size": 0, 00:24:28.951 "rdma_cm_event_timeout_ms": 0, 00:24:28.951 "dhchap_digests": [ 00:24:28.951 "sha256", 00:24:28.951 "sha384", 00:24:28.951 "sha512" 00:24:28.951 ], 00:24:28.951 "dhchap_dhgroups": [ 00:24:28.951 "null", 00:24:28.951 "ffdhe2048", 00:24:28.951 "ffdhe3072", 00:24:28.951 "ffdhe4096", 00:24:28.951 "ffdhe6144", 00:24:28.951 "ffdhe8192" 00:24:28.951 ] 00:24:28.951 } 00:24:28.951 }, 00:24:28.951 { 00:24:28.951 "method": "bdev_nvme_attach_controller", 00:24:28.951 "params": { 00:24:28.951 "name": "nvme0", 00:24:28.951 "trtype": "TCP", 00:24:28.951 "adrfam": "IPv4", 00:24:28.951 "traddr": "127.0.0.1", 00:24:28.951 "trsvcid": "4420", 00:24:28.951 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:28.951 "prchk_reftag": false, 00:24:28.951 "prchk_guard": false, 00:24:28.951 "ctrlr_loss_timeout_sec": 0, 00:24:28.951 "reconnect_delay_sec": 0, 00:24:28.951 "fast_io_fail_timeout_sec": 0, 00:24:28.951 "psk": "key0", 00:24:28.951 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:28.951 "hdgst": false, 00:24:28.951 "ddgst": false, 00:24:28.951 "multipath": "multipath" 00:24:28.951 } 00:24:28.951 }, 00:24:28.951 { 00:24:28.951 "method": "bdev_nvme_set_hotplug", 00:24:28.951 "params": { 00:24:28.951 "period_us": 100000, 00:24:28.951 "enable": false 00:24:28.951 } 00:24:28.951 }, 00:24:28.951 { 00:24:28.951 "method": "bdev_wait_for_examine" 00:24:28.951 } 00:24:28.951 ] 00:24:28.951 }, 00:24:28.951 { 00:24:28.951 "subsystem": "nbd", 00:24:28.951 "config": [] 00:24:28.951 } 00:24:28.951 ] 00:24:28.951 }' 00:24:28.951 20:52:23 keyring_file -- keyring/file.sh@115 -- # killprocess 85902 00:24:28.951 20:52:23 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 85902 ']' 00:24:28.951 20:52:23 keyring_file -- common/autotest_common.sh@958 -- # kill -0 85902 00:24:28.951 20:52:23 keyring_file -- common/autotest_common.sh@959 -- # uname 00:24:28.951 20:52:23 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:28.951 20:52:23 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85902 00:24:28.951 killing process with pid 85902 00:24:28.951 Received shutdown signal, test time was about 1.000000 seconds 00:24:28.951 00:24:28.951 Latency(us) 00:24:28.951 [2024-11-26T20:52:23.944Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:28.951 [2024-11-26T20:52:23.944Z] =================================================================================================================== 00:24:28.951 [2024-11-26T20:52:23.944Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:28.951 20:52:23 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:28.951 20:52:23 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:28.951 20:52:23 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85902' 00:24:28.951 
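The JSON dump above is the result of keyring/file.sh@113 calling save_config on the running bperf instance: note that the two keyring_file keys appear under the "keyring" subsystem and that the bdev_nvme_attach_controller entry carries "psk": "key0", which is what lets an identically configured instance be respawned later. A hedged sketch of capturing and inspecting that configuration, assuming the same socket path:

    # Sketch: capture the live bperf configuration (socket path from the log)
    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"
    config=$($RPC save_config)    # JSON holding the keyring keys and the psk-bearing attach entry
    echo "$config" | jq '.subsystems[] | select(.subsystem == "keyring")'   # inspect just the keyring section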
20:52:23 keyring_file -- common/autotest_common.sh@973 -- # kill 85902 00:24:28.951 20:52:23 keyring_file -- common/autotest_common.sh@978 -- # wait 85902 00:24:29.210 20:52:24 keyring_file -- keyring/file.sh@118 -- # bperfpid=86158 00:24:29.210 20:52:24 keyring_file -- keyring/file.sh@120 -- # waitforlisten 86158 /var/tmp/bperf.sock 00:24:29.210 20:52:24 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 86158 ']' 00:24:29.210 20:52:24 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:24:29.210 "subsystems": [ 00:24:29.210 { 00:24:29.210 "subsystem": "keyring", 00:24:29.210 "config": [ 00:24:29.210 { 00:24:29.210 "method": "keyring_file_add_key", 00:24:29.210 "params": { 00:24:29.210 "name": "key0", 00:24:29.210 "path": "/tmp/tmp.I0meFUtOBf" 00:24:29.210 } 00:24:29.210 }, 00:24:29.210 { 00:24:29.210 "method": "keyring_file_add_key", 00:24:29.210 "params": { 00:24:29.210 "name": "key1", 00:24:29.210 "path": "/tmp/tmp.DgZTvuXD56" 00:24:29.210 } 00:24:29.210 } 00:24:29.210 ] 00:24:29.210 }, 00:24:29.210 { 00:24:29.210 "subsystem": "iobuf", 00:24:29.210 "config": [ 00:24:29.210 { 00:24:29.210 "method": "iobuf_set_options", 00:24:29.210 "params": { 00:24:29.210 "small_pool_count": 8192, 00:24:29.210 "large_pool_count": 1024, 00:24:29.210 "small_bufsize": 8192, 00:24:29.210 "large_bufsize": 135168, 00:24:29.210 "enable_numa": false 00:24:29.210 } 00:24:29.210 } 00:24:29.210 ] 00:24:29.210 }, 00:24:29.210 { 00:24:29.210 "subsystem": "sock", 00:24:29.210 "config": [ 00:24:29.210 { 00:24:29.210 "method": "sock_set_default_impl", 00:24:29.210 "params": { 00:24:29.210 "impl_name": "uring" 00:24:29.210 } 00:24:29.210 }, 00:24:29.210 { 00:24:29.210 "method": "sock_impl_set_options", 00:24:29.210 "params": { 00:24:29.210 "impl_name": "ssl", 00:24:29.210 "recv_buf_size": 4096, 00:24:29.210 "send_buf_size": 4096, 00:24:29.210 "enable_recv_pipe": true, 00:24:29.210 "enable_quickack": false, 00:24:29.210 "enable_placement_id": 0, 00:24:29.210 "enable_zerocopy_send_server": true, 00:24:29.210 "enable_zerocopy_send_client": false, 00:24:29.210 "zerocopy_threshold": 0, 00:24:29.210 "tls_version": 0, 00:24:29.210 "enable_ktls": false 00:24:29.210 } 00:24:29.210 }, 00:24:29.210 { 00:24:29.210 "method": "sock_impl_set_options", 00:24:29.210 "params": { 00:24:29.210 "impl_name": "posix", 00:24:29.210 "recv_buf_size": 2097152, 00:24:29.210 "send_buf_size": 2097152, 00:24:29.210 "enable_recv_pipe": true, 00:24:29.210 "enable_quickack": false, 00:24:29.210 "enable_placement_id": 0, 00:24:29.210 "enable_zerocopy_send_server": true, 00:24:29.210 "enable_zerocopy_send_client": false, 00:24:29.210 "zerocopy_threshold": 0, 00:24:29.210 "tls_version": 0, 00:24:29.210 "enable_ktls": false 00:24:29.210 } 00:24:29.210 }, 00:24:29.210 { 00:24:29.210 "method": "sock_impl_set_options", 00:24:29.210 "params": { 00:24:29.210 "impl_name": "uring", 00:24:29.210 "recv_buf_size": 2097152, 00:24:29.210 "send_buf_size": 2097152, 00:24:29.210 "enable_recv_pipe": true, 00:24:29.210 "enable_quickack": false, 00:24:29.210 "enable_placement_id": 0, 00:24:29.210 "enable_zerocopy_send_server": false, 00:24:29.210 "enable_zerocopy_send_client": false, 00:24:29.210 "zerocopy_threshold": 0, 00:24:29.210 "tls_version": 0, 00:24:29.210 "enable_ktls": false 00:24:29.210 } 00:24:29.210 } 00:24:29.210 ] 00:24:29.210 }, 00:24:29.210 { 00:24:29.210 "subsystem": "vmd", 00:24:29.210 "config": [] 00:24:29.210 }, 00:24:29.210 { 00:24:29.210 "subsystem": "accel", 00:24:29.210 "config": [ 00:24:29.210 { 00:24:29.210 "method": "accel_set_options", 
00:24:29.210 "params": { 00:24:29.210 "small_cache_size": 128, 00:24:29.210 "large_cache_size": 16, 00:24:29.210 "task_count": 2048, 00:24:29.210 "sequence_count": 2048, 00:24:29.210 "buf_count": 2048 00:24:29.210 } 00:24:29.210 } 00:24:29.210 ] 00:24:29.210 }, 00:24:29.210 { 00:24:29.210 "subsystem": "bdev", 00:24:29.210 "config": [ 00:24:29.210 { 00:24:29.210 "method": "bdev_set_options", 00:24:29.210 "params": { 00:24:29.210 "bdev_io_pool_size": 65535, 00:24:29.210 "bdev_io_cache_size": 256, 00:24:29.210 "bdev_auto_examine": true, 00:24:29.210 "iobuf_small_cache_size": 128, 00:24:29.210 "iobuf_large_cache_size": 16 00:24:29.210 } 00:24:29.210 }, 00:24:29.210 { 00:24:29.210 "method": "bdev_raid_set_options", 00:24:29.210 "params": { 00:24:29.210 "process_window_size_kb": 1024, 00:24:29.210 "process_max_bandwidth_mb_sec": 0 00:24:29.210 } 00:24:29.210 }, 00:24:29.210 { 00:24:29.210 "method": "bdev_iscsi_set_options", 00:24:29.210 "params": { 00:24:29.210 "timeout_sec": 30 00:24:29.210 } 00:24:29.210 }, 00:24:29.210 { 00:24:29.210 "method": "bdev_nvme_set_options", 00:24:29.210 "params": { 00:24:29.210 "action_on_timeout": "none", 00:24:29.210 "timeout_us": 0, 00:24:29.210 "timeout_admin_us": 0, 00:24:29.210 "keep_alive_timeout_ms": 10000, 00:24:29.210 "arbitration_burst": 0, 00:24:29.210 "low_priority_weight": 0, 00:24:29.210 "medium_priority_weight": 0, 00:24:29.210 "high_priority_weight": 0, 00:24:29.210 "nvme_adminq_poll_period_us": 10000, 00:24:29.210 "nvme_ioq_poll_period_us": 0, 00:24:29.210 "io_queue_requests": 512, 00:24:29.210 "delay_cmd_submit": true, 00:24:29.210 "transport_retry_count": 4, 00:24:29.210 "bdev_retry_count": 3, 00:24:29.210 "transport_ack_timeout": 0, 00:24:29.210 "ctrlr_loss_timeout_sec": 0, 00:24:29.210 "reconnect_delay_sec": 0, 00:24:29.210 "fast_io_fail_timeout_sec": 0, 00:24:29.210 "disable_auto_failback": false, 00:24:29.210 "generate_uuids": false, 00:24:29.210 "transport_tos": 0, 00:24:29.210 "nvme_error_stat": false, 00:24:29.210 "rdma_srq_size": 0, 00:24:29.210 "io_path_stat": false, 00:24:29.210 "allow_accel_sequence": false, 00:24:29.210 "rdma_max_cq_size": 0, 00:24:29.210 "rdma_cm_event_timeout_ms": 0, 00:24:29.210 "dhchap_digests": [ 00:24:29.210 "sha256", 00:24:29.210 "sha384", 00:24:29.210 "sha512" 00:24:29.210 ], 00:24:29.210 "dhchap_dhgroups": [ 00:24:29.210 "null", 00:24:29.210 "ffdhe2048", 00:24:29.210 "ffdhe3072", 00:24:29.210 "ffdhe4096", 00:24:29.210 "ffdhe6144", 00:24:29.210 "ffdhe8192" 00:24:29.210 ] 00:24:29.210 } 00:24:29.210 }, 00:24:29.210 { 00:24:29.210 "method": "bdev_nvme_attach_controller", 00:24:29.210 "params": { 00:24:29.210 "name": "nvme0", 00:24:29.210 "trtype": "TCP", 00:24:29.210 "adrfam": "IPv4", 00:24:29.210 "traddr": "127.0.0.1", 00:24:29.210 "trsvcid": "4420", 00:24:29.210 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:29.210 "prchk_reftag": false, 00:24:29.210 "prchk_guard": false, 00:24:29.210 "ctrlr_loss_timeout_sec": 0, 00:24:29.210 "reconnect_delay_sec": 0, 00:24:29.210 "fast_io_fail_timeout_sec": 0, 00:24:29.210 "psk": "key0", 00:24:29.210 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:29.210 "hdgst": false, 00:24:29.210 "ddgst": false, 00:24:29.210 "multipath": "multipath" 00:24:29.210 } 00:24:29.210 }, 00:24:29.210 { 00:24:29.210 "method": "bdev_nvme_set_hotplug", 00:24:29.210 "params": { 00:24:29.210 "period_us": 100000, 00:24:29.210 "enable": false 00:24:29.210 } 00:24:29.210 }, 00:24:29.210 { 00:24:29.210 "method": "bdev_wait_for_examine" 00:24:29.210 } 00:24:29.210 ] 00:24:29.210 }, 00:24:29.210 { 00:24:29.210 
"subsystem": "nbd", 00:24:29.210 "config": [] 00:24:29.210 } 00:24:29.210 ] 00:24:29.210 }' 00:24:29.210 20:52:24 keyring_file -- keyring/file.sh@116 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:24:29.211 20:52:24 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:29.211 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:29.211 20:52:24 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:29.211 20:52:24 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:29.211 20:52:24 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:29.211 20:52:24 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:24:29.211 [2024-11-26 20:52:24.109576] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:24:29.211 [2024-11-26 20:52:24.109690] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86158 ] 00:24:29.469 [2024-11-26 20:52:24.268143] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:29.469 [2024-11-26 20:52:24.346774] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:29.728 [2024-11-26 20:52:24.481811] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:24:29.728 [2024-11-26 20:52:24.539978] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:30.294 20:52:25 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:30.294 20:52:25 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:24:30.294 20:52:25 keyring_file -- keyring/file.sh@121 -- # jq length 00:24:30.294 20:52:25 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:24:30.294 20:52:25 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:30.551 20:52:25 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:24:30.551 20:52:25 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:24:30.551 20:52:25 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:30.551 20:52:25 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:24:30.551 20:52:25 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:30.551 20:52:25 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:30.551 20:52:25 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:30.809 20:52:25 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:24:30.809 20:52:25 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:24:30.809 20:52:25 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:30.809 20:52:25 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:24:30.809 20:52:25 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:30.809 20:52:25 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:24:30.809 20:52:25 keyring_file -- keyring/common.sh@8 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:31.378 20:52:26 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:24:31.378 20:52:26 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:24:31.378 20:52:26 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:24:31.378 20:52:26 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:24:31.378 20:52:26 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:24:31.378 20:52:26 keyring_file -- keyring/file.sh@1 -- # cleanup 00:24:31.378 20:52:26 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.I0meFUtOBf /tmp/tmp.DgZTvuXD56 00:24:31.378 20:52:26 keyring_file -- keyring/file.sh@20 -- # killprocess 86158 00:24:31.378 20:52:26 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 86158 ']' 00:24:31.378 20:52:26 keyring_file -- common/autotest_common.sh@958 -- # kill -0 86158 00:24:31.378 20:52:26 keyring_file -- common/autotest_common.sh@959 -- # uname 00:24:31.638 20:52:26 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:31.638 20:52:26 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86158 00:24:31.638 20:52:26 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:31.638 20:52:26 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:31.638 killing process with pid 86158 00:24:31.638 20:52:26 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86158' 00:24:31.638 20:52:26 keyring_file -- common/autotest_common.sh@973 -- # kill 86158 00:24:31.638 Received shutdown signal, test time was about 1.000000 seconds 00:24:31.638 00:24:31.638 Latency(us) 00:24:31.638 [2024-11-26T20:52:26.631Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:31.638 [2024-11-26T20:52:26.631Z] =================================================================================================================== 00:24:31.638 [2024-11-26T20:52:26.631Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:31.638 20:52:26 keyring_file -- common/autotest_common.sh@978 -- # wait 86158 00:24:31.638 20:52:26 keyring_file -- keyring/file.sh@21 -- # killprocess 85885 00:24:31.638 20:52:26 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 85885 ']' 00:24:31.638 20:52:26 keyring_file -- common/autotest_common.sh@958 -- # kill -0 85885 00:24:31.638 20:52:26 keyring_file -- common/autotest_common.sh@959 -- # uname 00:24:31.638 20:52:26 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:31.638 20:52:26 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85885 00:24:31.897 killing process with pid 85885 00:24:31.897 20:52:26 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:31.897 20:52:26 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:31.897 20:52:26 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85885' 00:24:31.897 20:52:26 keyring_file -- common/autotest_common.sh@973 -- # kill 85885 00:24:31.897 20:52:26 keyring_file -- common/autotest_common.sh@978 -- # wait 85885 00:24:32.155 00:24:32.155 real 0m16.497s 00:24:32.155 user 0m39.791s 00:24:32.155 sys 0m3.891s 00:24:32.155 ************************************ 00:24:32.155 END TEST keyring_file 00:24:32.155 
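The refcnt assertions above (file.sh@121-123) are built from a small jq filter over keyring_get_keys; after the config-driven restart key0 reports a refcount of 2 (registered and, presumably, referenced by the attached controller) while key1 reports 1. A sketch of the helper, mirroring what keyring/common.sh appears to do:

    # Sketch of the refcount check (names and socket path from the log)
    get_refcnt() {
        local name=$1
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys \
            | jq -r ".[] | select(.name == \"$name\") | .refcnt"
    }

    (( $(get_refcnt key0) == 2 ))   # registered and in use
    (( $(get_refcnt key1) == 1 ))   # registered only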
************************************ 00:24:32.155 20:52:27 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:32.155 20:52:27 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:24:32.414 20:52:27 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:24:32.414 20:52:27 -- spdk/autotest.sh@294 -- # run_test keyring_linux /home/vagrant/spdk_repo/spdk/scripts/keyctl-session-wrapper /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:24:32.414 20:52:27 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:32.414 20:52:27 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:32.414 20:52:27 -- common/autotest_common.sh@10 -- # set +x 00:24:32.414 ************************************ 00:24:32.414 START TEST keyring_linux 00:24:32.414 ************************************ 00:24:32.414 20:52:27 keyring_linux -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/scripts/keyctl-session-wrapper /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:24:32.414 Joined session keyring: 226470579 00:24:32.414 * Looking for test storage... 00:24:32.414 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:24:32.414 20:52:27 keyring_linux -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:32.414 20:52:27 keyring_linux -- common/autotest_common.sh@1693 -- # lcov --version 00:24:32.414 20:52:27 keyring_linux -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:32.414 20:52:27 keyring_linux -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:32.674 20:52:27 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:32.674 20:52:27 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:32.674 20:52:27 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:32.674 20:52:27 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:24:32.674 20:52:27 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:24:32.674 20:52:27 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:24:32.674 20:52:27 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:24:32.674 20:52:27 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:24:32.674 20:52:27 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:24:32.674 20:52:27 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:24:32.674 20:52:27 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:32.674 20:52:27 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:24:32.674 20:52:27 keyring_linux -- scripts/common.sh@345 -- # : 1 00:24:32.674 20:52:27 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:32.674 20:52:27 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
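keyring_linux is launched through scripts/keyctl-session-wrapper, and the "Joined session keyring: 226470579" line is the tell-tale output of keyctl joining a fresh session keyring before the test starts, so keys added to @s never leak into the caller's session. The wrapper's exact contents are not shown in the log, so the following is only an assumed equivalent of that pattern:

    # Sketch (assumption): run the test inside a brand-new anonymous session keyring;
    # keyctl prints "Joined session keyring: <serial>" before executing the command
    keyctl session - /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh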
ver1_l : ver2_l) )) 00:24:32.674 20:52:27 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:24:32.674 20:52:27 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:24:32.674 20:52:27 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:32.674 20:52:27 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:24:32.674 20:52:27 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:24:32.674 20:52:27 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:24:32.674 20:52:27 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:24:32.674 20:52:27 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:32.674 20:52:27 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:24:32.674 20:52:27 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:24:32.674 20:52:27 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:32.674 20:52:27 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:32.674 20:52:27 keyring_linux -- scripts/common.sh@368 -- # return 0 00:24:32.674 20:52:27 keyring_linux -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:32.674 20:52:27 keyring_linux -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:32.674 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:32.674 --rc genhtml_branch_coverage=1 00:24:32.674 --rc genhtml_function_coverage=1 00:24:32.674 --rc genhtml_legend=1 00:24:32.674 --rc geninfo_all_blocks=1 00:24:32.674 --rc geninfo_unexecuted_blocks=1 00:24:32.674 00:24:32.674 ' 00:24:32.674 20:52:27 keyring_linux -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:32.674 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:32.674 --rc genhtml_branch_coverage=1 00:24:32.674 --rc genhtml_function_coverage=1 00:24:32.674 --rc genhtml_legend=1 00:24:32.674 --rc geninfo_all_blocks=1 00:24:32.674 --rc geninfo_unexecuted_blocks=1 00:24:32.674 00:24:32.674 ' 00:24:32.674 20:52:27 keyring_linux -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:32.674 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:32.674 --rc genhtml_branch_coverage=1 00:24:32.674 --rc genhtml_function_coverage=1 00:24:32.674 --rc genhtml_legend=1 00:24:32.674 --rc geninfo_all_blocks=1 00:24:32.674 --rc geninfo_unexecuted_blocks=1 00:24:32.674 00:24:32.674 ' 00:24:32.674 20:52:27 keyring_linux -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:32.674 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:32.674 --rc genhtml_branch_coverage=1 00:24:32.674 --rc genhtml_function_coverage=1 00:24:32.674 --rc genhtml_legend=1 00:24:32.674 --rc geninfo_all_blocks=1 00:24:32.674 --rc geninfo_unexecuted_blocks=1 00:24:32.674 00:24:32.674 ' 00:24:32.674 20:52:27 keyring_linux -- keyring/linux.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:24:32.674 20:52:27 keyring_linux -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:32.674 20:52:27 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:24:32.674 20:52:27 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:32.674 20:52:27 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:32.674 20:52:27 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:32.674 20:52:27 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:32.674 20:52:27 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:32.674 20:52:27 
keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:32.674 20:52:27 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:32.674 20:52:27 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:32.674 20:52:27 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:32.674 20:52:27 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:32.674 20:52:27 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b7a0101-ee75-44bd-b64f-b6a56d193f2b 00:24:32.674 20:52:27 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=5b7a0101-ee75-44bd-b64f-b6a56d193f2b 00:24:32.674 20:52:27 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:32.674 20:52:27 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:32.674 20:52:27 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:32.674 20:52:27 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:32.674 20:52:27 keyring_linux -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:32.674 20:52:27 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:24:32.674 20:52:27 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:32.674 20:52:27 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:32.674 20:52:27 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:32.674 20:52:27 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:32.674 20:52:27 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:32.674 20:52:27 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:32.674 20:52:27 keyring_linux -- paths/export.sh@5 -- # export PATH 00:24:32.674 20:52:27 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:32.674 20:52:27 keyring_linux -- nvmf/common.sh@51 -- # : 0 
00:24:32.674 20:52:27 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:32.674 20:52:27 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:32.674 20:52:27 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:32.674 20:52:27 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:32.674 20:52:27 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:32.674 20:52:27 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:32.674 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:32.674 20:52:27 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:32.674 20:52:27 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:32.674 20:52:27 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:32.674 20:52:27 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:24:32.674 20:52:27 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:24:32.674 20:52:27 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:24:32.674 20:52:27 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:24:32.674 20:52:27 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:24:32.674 20:52:27 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:24:32.675 20:52:27 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:24:32.675 20:52:27 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:24:32.675 20:52:27 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:24:32.675 20:52:27 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:24:32.675 20:52:27 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:24:32.675 20:52:27 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:24:32.675 20:52:27 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:24:32.675 20:52:27 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:24:32.675 20:52:27 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:24:32.675 20:52:27 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:24:32.675 20:52:27 keyring_linux -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:24:32.675 20:52:27 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:24:32.675 20:52:27 keyring_linux -- nvmf/common.sh@733 -- # python - 00:24:32.675 20:52:27 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:24:32.675 /tmp/:spdk-test:key0 00:24:32.675 20:52:27 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:24:32.675 20:52:27 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:24:32.675 20:52:27 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:24:32.675 20:52:27 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:24:32.675 20:52:27 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:24:32.675 20:52:27 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:24:32.675 20:52:27 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:24:32.675 20:52:27 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 
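prep_key (keyring/common.sh@15-23) turns the raw hex key 00112233445566778899aabbccddeeff into the NVMe TLS PSK interchange form and writes it to /tmp/:spdk-test:key0 with mode 0600. The interchange string that shows up later in the log ("NVMeTLSkey-1:00:...JEiQ:") is the version prefix, a hash identifier (00, matching digest 0), the base64 of the key material with a CRC32 appended, and a trailing colon. The sketch below is an approximation of what format_interchange_psk's inline python does, not a copy of it.

    # Sketch (assumption): rebuild an NVMe TLS PSK interchange string from a configured key
    python3 - <<'EOF'
    import base64, struct, zlib
    key = b"00112233445566778899aabbccddeeff"           # key0 value from linux.sh in the log
    blob = key + struct.pack("<I", zlib.crc32(key))     # key material + little-endian CRC32
    print("NVMeTLSkey-1:00:" + base64.b64encode(blob).decode() + ":")
    EOF

Writing that string to /tmp/:spdk-test:key0 and restricting it with chmod 0600 is what the chmod line recorded above corresponds to.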
112233445566778899aabbccddeeff00 0 00:24:32.675 20:52:27 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:24:32.675 20:52:27 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:24:32.675 20:52:27 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:24:32.675 20:52:27 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:24:32.675 20:52:27 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:24:32.675 20:52:27 keyring_linux -- nvmf/common.sh@733 -- # python - 00:24:32.675 20:52:27 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:24:32.675 /tmp/:spdk-test:key1 00:24:32.675 20:52:27 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:24:32.675 20:52:27 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=86285 00:24:32.675 20:52:27 keyring_linux -- keyring/linux.sh@50 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:32.675 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:32.675 20:52:27 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 86285 00:24:32.675 20:52:27 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 86285 ']' 00:24:32.675 20:52:27 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:32.675 20:52:27 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:32.675 20:52:27 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:32.675 20:52:27 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:32.675 20:52:27 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:24:32.675 [2024-11-26 20:52:27.643494] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:24:32.675 [2024-11-26 20:52:27.643606] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86285 ] 00:24:32.933 [2024-11-26 20:52:27.803024] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:32.933 [2024-11-26 20:52:27.884562] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:33.191 [2024-11-26 20:52:27.989027] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:24:33.758 20:52:28 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:33.758 20:52:28 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:24:33.758 20:52:28 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:24:33.758 20:52:28 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.758 20:52:28 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:24:33.758 [2024-11-26 20:52:28.696657] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:33.758 null0 00:24:33.758 [2024-11-26 20:52:28.728618] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:33.758 [2024-11-26 20:52:28.729018] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:24:33.758 20:52:28 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.016 20:52:28 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:24:34.016 406843534 00:24:34.016 20:52:28 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:24:34.016 295446180 00:24:34.017 20:52:28 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=86304 00:24:34.017 20:52:28 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 86304 /var/tmp/bperf.sock 00:24:34.017 20:52:28 keyring_linux -- keyring/linux.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:24:34.017 20:52:28 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 86304 ']' 00:24:34.017 20:52:28 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:34.017 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:34.017 20:52:28 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:34.017 20:52:28 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:34.017 20:52:28 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:34.017 20:52:28 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:24:34.017 [2024-11-26 20:52:28.816269] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
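On the Linux-keyring side the PSKs are not files but user keys in the session keyring: the keyctl add calls above return the serial numbers 406843534 and 295446180 that linux.sh later searches for, prints, and unlinks. A condensed sketch of that key lifecycle, with the payload abbreviated (serials will differ between runs):

    # Sketch: manage a test PSK in the session keyring, as linux.sh does (payload abbreviated)
    sn0=$(keyctl add user :spdk-test:key0 "NVMeTLSkey-1:00:MDAx...JEiQ:" @s)   # returns the key serial
    keyctl search @s user :spdk-test:key0     # resolves the key name back to $sn0
    keyctl print "$sn0"                       # dumps the payload for comparison against the expected PSK
    keyctl unlink "$sn0"                      # cleanup; keyctl reports "1 links removed"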
00:24:34.017 [2024-11-26 20:52:28.816851] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86304 ] 00:24:34.017 [2024-11-26 20:52:28.976870] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:34.279 [2024-11-26 20:52:29.054916] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:34.844 20:52:29 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:34.844 20:52:29 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:24:34.844 20:52:29 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:24:34.844 20:52:29 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:24:35.102 20:52:30 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:24:35.102 20:52:30 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:24:35.360 [2024-11-26 20:52:30.303287] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:24:35.618 20:52:30 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:24:35.618 20:52:30 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:24:35.876 [2024-11-26 20:52:30.655816] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:35.876 nvme0n1 00:24:35.876 20:52:30 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:24:35.876 20:52:30 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:24:35.876 20:52:30 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:24:35.876 20:52:30 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:24:35.876 20:52:30 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:24:35.876 20:52:30 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:36.134 20:52:30 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:24:36.134 20:52:30 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:24:36.134 20:52:30 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:24:36.134 20:52:30 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:24:36.134 20:52:30 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:36.134 20:52:30 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:36.134 20:52:30 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:24:36.516 20:52:31 keyring_linux -- keyring/linux.sh@25 -- # sn=406843534 00:24:36.516 20:52:31 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:24:36.516 20:52:31 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 
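The bdevperf instance for this test is started with --wait-for-rpc, so the Linux keyring backend can be switched on before the framework initializes; only then is the controller attached with a key name carrying the ":spdk-test:" prefix instead of a registered file key. A hedged reconstruction of that ordering, using the RPCs visible in the trace:

    # Sketch: enable the Linux keyring backend before framework init, then attach with a kernel key name
    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"

    $RPC keyring_linux_set_options --enable   # allow PSKs to be resolved from the kernel keyring
    $RPC framework_start_init                 # finish the startup deferred by --wait-for-rpc
    $RPC bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0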
00:24:36.516 20:52:31 keyring_linux -- keyring/linux.sh@26 -- # [[ 406843534 == \4\0\6\8\4\3\5\3\4 ]] 00:24:36.516 20:52:31 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 406843534 00:24:36.516 20:52:31 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:24:36.516 20:52:31 keyring_linux -- keyring/linux.sh@79 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:36.516 Running I/O for 1 seconds... 00:24:37.482 14982.00 IOPS, 58.52 MiB/s 00:24:37.482 Latency(us) 00:24:37.482 [2024-11-26T20:52:32.475Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:37.482 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:24:37.482 nvme0n1 : 1.01 14983.21 58.53 0.00 0.00 8504.68 6865.68 18849.40 00:24:37.482 [2024-11-26T20:52:32.475Z] =================================================================================================================== 00:24:37.482 [2024-11-26T20:52:32.475Z] Total : 14983.21 58.53 0.00 0.00 8504.68 6865.68 18849.40 00:24:37.482 { 00:24:37.483 "results": [ 00:24:37.483 { 00:24:37.483 "job": "nvme0n1", 00:24:37.483 "core_mask": "0x2", 00:24:37.483 "workload": "randread", 00:24:37.483 "status": "finished", 00:24:37.483 "queue_depth": 128, 00:24:37.483 "io_size": 4096, 00:24:37.483 "runtime": 1.008529, 00:24:37.483 "iops": 14983.208217116216, 00:24:37.483 "mibps": 58.52815709811022, 00:24:37.483 "io_failed": 0, 00:24:37.483 "io_timeout": 0, 00:24:37.483 "avg_latency_us": 8504.675105804348, 00:24:37.483 "min_latency_us": 6865.676190476191, 00:24:37.483 "max_latency_us": 18849.401904761904 00:24:37.483 } 00:24:37.483 ], 00:24:37.483 "core_count": 1 00:24:37.483 } 00:24:37.483 20:52:32 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:24:37.483 20:52:32 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:24:38.049 20:52:32 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:24:38.049 20:52:32 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:24:38.049 20:52:32 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:24:38.049 20:52:32 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:24:38.049 20:52:32 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:24:38.049 20:52:32 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:38.307 20:52:33 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:24:38.307 20:52:33 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:24:38.307 20:52:33 keyring_linux -- keyring/linux.sh@23 -- # return 00:24:38.307 20:52:33 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:24:38.307 20:52:33 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:24:38.307 20:52:33 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:24:38.307 
20:52:33 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:24:38.307 20:52:33 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:38.307 20:52:33 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:24:38.307 20:52:33 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:38.307 20:52:33 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:24:38.307 20:52:33 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:24:38.565 [2024-11-26 20:52:33.362356] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:24:38.565 [2024-11-26 20:52:33.363172] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c015d0 (107): Transport endpoint is not connected 00:24:38.565 [2024-11-26 20:52:33.364149] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c015d0 (9): Bad file descriptor 00:24:38.565 [2024-11-26 20:52:33.365144] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:24:38.565 [2024-11-26 20:52:33.365300] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:24:38.565 [2024-11-26 20:52:33.365387] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:24:38.565 [2024-11-26 20:52:33.365501] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:24:38.565 request: 00:24:38.565 { 00:24:38.565 "name": "nvme0", 00:24:38.565 "trtype": "tcp", 00:24:38.565 "traddr": "127.0.0.1", 00:24:38.565 "adrfam": "ipv4", 00:24:38.565 "trsvcid": "4420", 00:24:38.565 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:38.565 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:38.565 "prchk_reftag": false, 00:24:38.565 "prchk_guard": false, 00:24:38.565 "hdgst": false, 00:24:38.565 "ddgst": false, 00:24:38.565 "psk": ":spdk-test:key1", 00:24:38.565 "allow_unrecognized_csi": false, 00:24:38.565 "method": "bdev_nvme_attach_controller", 00:24:38.565 "req_id": 1 00:24:38.565 } 00:24:38.565 Got JSON-RPC error response 00:24:38.565 response: 00:24:38.565 { 00:24:38.565 "code": -5, 00:24:38.565 "message": "Input/output error" 00:24:38.565 } 00:24:38.565 20:52:33 keyring_linux -- common/autotest_common.sh@655 -- # es=1 00:24:38.565 20:52:33 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:38.565 20:52:33 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:38.565 20:52:33 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:38.565 20:52:33 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:24:38.565 20:52:33 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:24:38.565 20:52:33 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:24:38.565 20:52:33 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:24:38.565 20:52:33 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:24:38.565 20:52:33 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:24:38.565 20:52:33 keyring_linux -- keyring/linux.sh@33 -- # sn=406843534 00:24:38.565 20:52:33 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 406843534 00:24:38.565 1 links removed 00:24:38.565 20:52:33 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:24:38.565 20:52:33 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:24:38.565 20:52:33 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:24:38.565 20:52:33 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:24:38.565 20:52:33 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:24:38.565 20:52:33 keyring_linux -- keyring/linux.sh@33 -- # sn=295446180 00:24:38.565 20:52:33 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 295446180 00:24:38.565 1 links removed 00:24:38.565 20:52:33 keyring_linux -- keyring/linux.sh@41 -- # killprocess 86304 00:24:38.565 20:52:33 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 86304 ']' 00:24:38.565 20:52:33 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 86304 00:24:38.565 20:52:33 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:24:38.565 20:52:33 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:38.565 20:52:33 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86304 00:24:38.565 killing process with pid 86304 00:24:38.565 Received shutdown signal, test time was about 1.000000 seconds 00:24:38.565 00:24:38.565 Latency(us) 00:24:38.565 [2024-11-26T20:52:33.558Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:38.565 [2024-11-26T20:52:33.558Z] =================================================================================================================== 00:24:38.565 [2024-11-26T20:52:33.558Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:38.565 20:52:33 keyring_linux -- 
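This last check is a negative one: linux.sh@84 wraps the attach in the NOT helper, expecting the connection with :spdk-test:key1 to fail because the target side was not set up for that PSK, so the "Input/output error" JSON-RPC response above is the desired outcome. A sketch of the expected-failure idiom, with plain '!' standing in for the autotest NOT wrapper:

    # Sketch: assert that attaching with the wrong PSK is rejected ('!' approximates autotest's NOT helper)
    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"

    if ! $RPC bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
            -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1; then
        echo "attach with :spdk-test:key1 rejected, as expected"
    fi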
common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:38.565 20:52:33 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:38.565 20:52:33 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86304' 00:24:38.565 20:52:33 keyring_linux -- common/autotest_common.sh@973 -- # kill 86304 00:24:38.565 20:52:33 keyring_linux -- common/autotest_common.sh@978 -- # wait 86304 00:24:38.823 20:52:33 keyring_linux -- keyring/linux.sh@42 -- # killprocess 86285 00:24:38.823 20:52:33 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 86285 ']' 00:24:38.823 20:52:33 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 86285 00:24:38.823 20:52:33 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:24:38.823 20:52:33 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:38.823 20:52:33 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86285 00:24:38.823 killing process with pid 86285 00:24:38.823 20:52:33 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:38.823 20:52:33 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:38.823 20:52:33 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86285' 00:24:38.823 20:52:33 keyring_linux -- common/autotest_common.sh@973 -- # kill 86285 00:24:38.823 20:52:33 keyring_linux -- common/autotest_common.sh@978 -- # wait 86285 00:24:39.389 ************************************ 00:24:39.389 END TEST keyring_linux 00:24:39.389 ************************************ 00:24:39.389 00:24:39.389 real 0m6.998s 00:24:39.389 user 0m13.241s 00:24:39.389 sys 0m1.954s 00:24:39.389 20:52:34 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:39.389 20:52:34 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:24:39.389 20:52:34 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:24:39.389 20:52:34 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:24:39.389 20:52:34 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:24:39.389 20:52:34 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:24:39.389 20:52:34 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:24:39.389 20:52:34 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:24:39.389 20:52:34 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:24:39.389 20:52:34 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:24:39.389 20:52:34 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:24:39.389 20:52:34 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:24:39.389 20:52:34 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:24:39.389 20:52:34 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:24:39.389 20:52:34 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:24:39.389 20:52:34 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:24:39.389 20:52:34 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:24:39.389 20:52:34 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:24:39.389 20:52:34 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:24:39.389 20:52:34 -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:39.389 20:52:34 -- common/autotest_common.sh@10 -- # set +x 00:24:39.389 20:52:34 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:24:39.389 20:52:34 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:24:39.389 20:52:34 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:24:39.389 20:52:34 -- common/autotest_common.sh@10 -- # set +x 00:24:41.922 INFO: APP EXITING 00:24:41.922 INFO: killing all VMs 
00:24:41.922 INFO: killing vhost app
00:24:41.922 INFO: EXIT DONE
00:24:42.489 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:24:42.489 0000:00:11.0 (1b36 0010): Already using the nvme driver
00:24:42.489 0000:00:10.0 (1b36 0010): Already using the nvme driver
00:24:43.423 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:24:43.423 Cleaning
00:24:43.423 Removing: /var/run/dpdk/spdk0/config
00:24:43.423 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0
00:24:43.423 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1
00:24:43.423 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2
00:24:43.423 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3
00:24:43.423 Removing: /var/run/dpdk/spdk0/fbarray_memzone
00:24:43.423 Removing: /var/run/dpdk/spdk0/hugepage_info
00:24:43.423 Removing: /var/run/dpdk/spdk1/config
00:24:43.423 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0
00:24:43.423 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1
00:24:43.423 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2
00:24:43.423 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3
00:24:43.423 Removing: /var/run/dpdk/spdk1/fbarray_memzone
00:24:43.423 Removing: /var/run/dpdk/spdk1/hugepage_info
00:24:43.423 Removing: /var/run/dpdk/spdk2/config
00:24:43.423 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0
00:24:43.423 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1
00:24:43.423 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2
00:24:43.423 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3
00:24:43.423 Removing: /var/run/dpdk/spdk2/fbarray_memzone
00:24:43.423 Removing: /var/run/dpdk/spdk2/hugepage_info
00:24:43.423 Removing: /var/run/dpdk/spdk3/config
00:24:43.423 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0
00:24:43.423 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1
00:24:43.423 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2
00:24:43.423 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3
00:24:43.423 Removing: /var/run/dpdk/spdk3/fbarray_memzone
00:24:43.423 Removing: /var/run/dpdk/spdk3/hugepage_info
00:24:43.423 Removing: /var/run/dpdk/spdk4/config
00:24:43.423 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0
00:24:43.423 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1
00:24:43.423 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2
00:24:43.423 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3
00:24:43.423 Removing: /var/run/dpdk/spdk4/fbarray_memzone
00:24:43.423 Removing: /var/run/dpdk/spdk4/hugepage_info
00:24:43.423 Removing: /dev/shm/nvmf_trace.0
00:24:43.423 Removing: /dev/shm/spdk_tgt_trace.pid56885
00:24:43.423 Removing: /var/run/dpdk/spdk0
00:24:43.423 Removing: /var/run/dpdk/spdk1
00:24:43.423 Removing: /var/run/dpdk/spdk2
00:24:43.423 Removing: /var/run/dpdk/spdk3
00:24:43.423 Removing: /var/run/dpdk/spdk4
00:24:43.423 Removing: /var/run/dpdk/spdk_pid56732
00:24:43.423 Removing: /var/run/dpdk/spdk_pid56885
00:24:43.423 Removing: /var/run/dpdk/spdk_pid57089
00:24:43.423 Removing: /var/run/dpdk/spdk_pid57170
00:24:43.423 Removing: /var/run/dpdk/spdk_pid57198
00:24:43.423 Removing: /var/run/dpdk/spdk_pid57307
00:24:43.423 Removing: /var/run/dpdk/spdk_pid57318
00:24:43.423 Removing: /var/run/dpdk/spdk_pid57457
00:24:43.423 Removing: /var/run/dpdk/spdk_pid57653
00:24:43.423 Removing: /var/run/dpdk/spdk_pid57807
00:24:43.423 Removing: /var/run/dpdk/spdk_pid57879
00:24:43.423 Removing: /var/run/dpdk/spdk_pid57956
00:24:43.423 Removing: /var/run/dpdk/spdk_pid58055
00:24:43.423 Removing: /var/run/dpdk/spdk_pid58140
00:24:43.423 Removing: /var/run/dpdk/spdk_pid58173
00:24:43.423 Removing: /var/run/dpdk/spdk_pid58203
00:24:43.423 Removing: /var/run/dpdk/spdk_pid58278
00:24:43.423 Removing: /var/run/dpdk/spdk_pid58370
00:24:43.423 Removing: /var/run/dpdk/spdk_pid58816
00:24:43.423 Removing: /var/run/dpdk/spdk_pid58868
00:24:43.423 Removing: /var/run/dpdk/spdk_pid58912
00:24:43.423 Removing: /var/run/dpdk/spdk_pid58928
00:24:43.423 Removing: /var/run/dpdk/spdk_pid59000
00:24:43.423 Removing: /var/run/dpdk/spdk_pid59009
00:24:43.423 Removing: /var/run/dpdk/spdk_pid59074
00:24:43.423 Removing: /var/run/dpdk/spdk_pid59090
00:24:43.423 Removing: /var/run/dpdk/spdk_pid59135
00:24:43.423 Removing: /var/run/dpdk/spdk_pid59146
00:24:43.423 Removing: /var/run/dpdk/spdk_pid59191
00:24:43.423 Removing: /var/run/dpdk/spdk_pid59209
00:24:43.423 Removing: /var/run/dpdk/spdk_pid59345
00:24:43.681 Removing: /var/run/dpdk/spdk_pid59381
00:24:43.681 Removing: /var/run/dpdk/spdk_pid59459
00:24:43.681 Removing: /var/run/dpdk/spdk_pid59793
00:24:43.681 Removing: /var/run/dpdk/spdk_pid59805
00:24:43.681 Removing: /var/run/dpdk/spdk_pid59841
00:24:43.681 Removing: /var/run/dpdk/spdk_pid59855
00:24:43.681 Removing: /var/run/dpdk/spdk_pid59876
00:24:43.681 Removing: /var/run/dpdk/spdk_pid59896
00:24:43.681 Removing: /var/run/dpdk/spdk_pid59914
00:24:43.681 Removing: /var/run/dpdk/spdk_pid59935
00:24:43.681 Removing: /var/run/dpdk/spdk_pid59954
00:24:43.681 Removing: /var/run/dpdk/spdk_pid59973
00:24:43.681 Removing: /var/run/dpdk/spdk_pid59994
00:24:43.681 Removing: /var/run/dpdk/spdk_pid60013
00:24:43.681 Removing: /var/run/dpdk/spdk_pid60032
00:24:43.681 Removing: /var/run/dpdk/spdk_pid60053
00:24:43.681 Removing: /var/run/dpdk/spdk_pid60074
00:24:43.681 Removing: /var/run/dpdk/spdk_pid60092
00:24:43.681 Removing: /var/run/dpdk/spdk_pid60104
00:24:43.681 Removing: /var/run/dpdk/spdk_pid60129
00:24:43.681 Removing: /var/run/dpdk/spdk_pid60142
00:24:43.681 Removing: /var/run/dpdk/spdk_pid60163
00:24:43.681 Removing: /var/run/dpdk/spdk_pid60199
00:24:43.681 Removing: /var/run/dpdk/spdk_pid60218
00:24:43.681 Removing: /var/run/dpdk/spdk_pid60250
00:24:43.681 Removing: /var/run/dpdk/spdk_pid60322
00:24:43.681 Removing: /var/run/dpdk/spdk_pid60350
00:24:43.681 Removing: /var/run/dpdk/spdk_pid60365
00:24:43.681 Removing: /var/run/dpdk/spdk_pid60399
00:24:43.681 Removing: /var/run/dpdk/spdk_pid60409
00:24:43.681 Removing: /var/run/dpdk/spdk_pid60416
00:24:43.681 Removing: /var/run/dpdk/spdk_pid60464
00:24:43.681 Removing: /var/run/dpdk/spdk_pid60478
00:24:43.681 Removing: /var/run/dpdk/spdk_pid60506
00:24:43.681 Removing: /var/run/dpdk/spdk_pid60521
00:24:43.681 Removing: /var/run/dpdk/spdk_pid60531
00:24:43.681 Removing: /var/run/dpdk/spdk_pid60546
00:24:43.681 Removing: /var/run/dpdk/spdk_pid60555
00:24:43.681 Removing: /var/run/dpdk/spdk_pid60570
00:24:43.681 Removing: /var/run/dpdk/spdk_pid60580
00:24:43.681 Removing: /var/run/dpdk/spdk_pid60589
00:24:43.681 Removing: /var/run/dpdk/spdk_pid60623
00:24:43.681 Removing: /var/run/dpdk/spdk_pid60650
00:24:43.681 Removing: /var/run/dpdk/spdk_pid60665
00:24:43.681 Removing: /var/run/dpdk/spdk_pid60693
00:24:43.681 Removing: /var/run/dpdk/spdk_pid60707
00:24:43.681 Removing: /var/run/dpdk/spdk_pid60716
00:24:43.681 Removing: /var/run/dpdk/spdk_pid60762
00:24:43.681 Removing: /var/run/dpdk/spdk_pid60773
00:24:43.681 Removing: /var/run/dpdk/spdk_pid60804
00:24:43.681 Removing: /var/run/dpdk/spdk_pid60813
00:24:43.681 Removing: /var/run/dpdk/spdk_pid60826
00:24:43.681 Removing: /var/run/dpdk/spdk_pid60834
00:24:43.681 Removing: /var/run/dpdk/spdk_pid60841
00:24:43.681 Removing: /var/run/dpdk/spdk_pid60854
00:24:43.681 Removing: /var/run/dpdk/spdk_pid60867
00:24:43.681 Removing: /var/run/dpdk/spdk_pid60869
00:24:43.681 Removing: /var/run/dpdk/spdk_pid60951
00:24:43.681 Removing: /var/run/dpdk/spdk_pid61004
00:24:43.681 Removing: /var/run/dpdk/spdk_pid61117
00:24:43.681 Removing: /var/run/dpdk/spdk_pid61150
00:24:43.681 Removing: /var/run/dpdk/spdk_pid61195
00:24:43.681 Removing: /var/run/dpdk/spdk_pid61215
00:24:43.681 Removing: /var/run/dpdk/spdk_pid61237
00:24:43.681 Removing: /var/run/dpdk/spdk_pid61257
00:24:43.681 Removing: /var/run/dpdk/spdk_pid61293
00:24:43.681 Removing: /var/run/dpdk/spdk_pid61310
00:24:43.681 Removing: /var/run/dpdk/spdk_pid61392
00:24:43.681 Removing: /var/run/dpdk/spdk_pid61415
00:24:43.681 Removing: /var/run/dpdk/spdk_pid61462
00:24:43.681 Removing: /var/run/dpdk/spdk_pid61557
00:24:43.681 Removing: /var/run/dpdk/spdk_pid61613
00:24:43.681 Removing: /var/run/dpdk/spdk_pid61642
00:24:43.681 Removing: /var/run/dpdk/spdk_pid61747
00:24:43.681 Removing: /var/run/dpdk/spdk_pid61794
00:24:43.681 Removing: /var/run/dpdk/spdk_pid61828
00:24:43.681 Removing: /var/run/dpdk/spdk_pid62054
00:24:43.681 Removing: /var/run/dpdk/spdk_pid62157
00:24:43.681 Removing: /var/run/dpdk/spdk_pid62186
00:24:43.681 Removing: /var/run/dpdk/spdk_pid62215
00:24:43.681 Removing: /var/run/dpdk/spdk_pid62254
00:24:43.681 Removing: /var/run/dpdk/spdk_pid62288
00:24:43.681 Removing: /var/run/dpdk/spdk_pid62321
00:24:43.681 Removing: /var/run/dpdk/spdk_pid62357
00:24:43.681 Removing: /var/run/dpdk/spdk_pid62765
00:24:43.681 Removing: /var/run/dpdk/spdk_pid62805
00:24:43.681 Removing: /var/run/dpdk/spdk_pid63141
00:24:43.681 Removing: /var/run/dpdk/spdk_pid63626
00:24:43.939 Removing: /var/run/dpdk/spdk_pid63911
00:24:43.939 Removing: /var/run/dpdk/spdk_pid64807
00:24:43.939 Removing: /var/run/dpdk/spdk_pid65722
00:24:43.939 Removing: /var/run/dpdk/spdk_pid65846
00:24:43.939 Removing: /var/run/dpdk/spdk_pid65908
00:24:43.939 Removing: /var/run/dpdk/spdk_pid67351
00:24:43.939 Removing: /var/run/dpdk/spdk_pid67668
00:24:43.939 Removing: /var/run/dpdk/spdk_pid71444
00:24:43.939 Removing: /var/run/dpdk/spdk_pid71804
00:24:43.939 Removing: /var/run/dpdk/spdk_pid71914
00:24:43.939 Removing: /var/run/dpdk/spdk_pid72049
00:24:43.939 Removing: /var/run/dpdk/spdk_pid72084
00:24:43.939 Removing: /var/run/dpdk/spdk_pid72107
00:24:43.939 Removing: /var/run/dpdk/spdk_pid72141
00:24:43.939 Removing: /var/run/dpdk/spdk_pid72241
00:24:43.939 Removing: /var/run/dpdk/spdk_pid72378
00:24:43.939 Removing: /var/run/dpdk/spdk_pid72537
00:24:43.939 Removing: /var/run/dpdk/spdk_pid72618
00:24:43.939 Removing: /var/run/dpdk/spdk_pid72813
00:24:43.939 Removing: /var/run/dpdk/spdk_pid72889
00:24:43.939 Removing: /var/run/dpdk/spdk_pid72970
00:24:43.939 Removing: /var/run/dpdk/spdk_pid73330
00:24:43.939 Removing: /var/run/dpdk/spdk_pid73740
00:24:43.939 Removing: /var/run/dpdk/spdk_pid73741
00:24:43.939 Removing: /var/run/dpdk/spdk_pid73742
00:24:43.939 Removing: /var/run/dpdk/spdk_pid74021
00:24:43.939 Removing: /var/run/dpdk/spdk_pid74291
00:24:43.939 Removing: /var/run/dpdk/spdk_pid74682
00:24:43.939 Removing: /var/run/dpdk/spdk_pid74689
00:24:43.939 Removing: /var/run/dpdk/spdk_pid75014
00:24:43.939 Removing: /var/run/dpdk/spdk_pid75039
00:24:43.939 Removing: /var/run/dpdk/spdk_pid75053
00:24:43.939 Removing: /var/run/dpdk/spdk_pid75089
00:24:43.939 Removing: /var/run/dpdk/spdk_pid75096
00:24:43.939 Removing: /var/run/dpdk/spdk_pid75450
00:24:43.940 Removing: /var/run/dpdk/spdk_pid75499
00:24:43.940 Removing: /var/run/dpdk/spdk_pid75832
00:24:43.940 Removing: /var/run/dpdk/spdk_pid76031
00:24:43.940 Removing: /var/run/dpdk/spdk_pid76459
00:24:43.940 Removing: /var/run/dpdk/spdk_pid77006
00:24:43.940 Removing: /var/run/dpdk/spdk_pid77901
00:24:43.940 Removing: /var/run/dpdk/spdk_pid78543
00:24:43.940 Removing: /var/run/dpdk/spdk_pid78546
00:24:43.940 Removing: /var/run/dpdk/spdk_pid80595
00:24:43.940 Removing: /var/run/dpdk/spdk_pid80659
00:24:43.940 Removing: /var/run/dpdk/spdk_pid80713
00:24:43.940 Removing: /var/run/dpdk/spdk_pid80769
00:24:43.940 Removing: /var/run/dpdk/spdk_pid80875
00:24:43.940 Removing: /var/run/dpdk/spdk_pid80922
00:24:43.940 Removing: /var/run/dpdk/spdk_pid80975
00:24:43.940 Removing: /var/run/dpdk/spdk_pid81026
00:24:43.940 Removing: /var/run/dpdk/spdk_pid81389
00:24:43.940 Removing: /var/run/dpdk/spdk_pid82593
00:24:43.940 Removing: /var/run/dpdk/spdk_pid82728
00:24:43.940 Removing: /var/run/dpdk/spdk_pid82970
00:24:43.940 Removing: /var/run/dpdk/spdk_pid83584
00:24:43.940 Removing: /var/run/dpdk/spdk_pid83745
00:24:43.940 Removing: /var/run/dpdk/spdk_pid83903
00:24:43.940 Removing: /var/run/dpdk/spdk_pid84004
00:24:43.940 Removing: /var/run/dpdk/spdk_pid84181
00:24:43.940 Removing: /var/run/dpdk/spdk_pid84291
00:24:43.940 Removing: /var/run/dpdk/spdk_pid85013
00:24:43.940 Removing: /var/run/dpdk/spdk_pid85043
00:24:43.940 Removing: /var/run/dpdk/spdk_pid85084
00:24:43.940 Removing: /var/run/dpdk/spdk_pid85339
00:24:43.940 Removing: /var/run/dpdk/spdk_pid85374
00:24:43.940 Removing: /var/run/dpdk/spdk_pid85408
00:24:43.940 Removing: /var/run/dpdk/spdk_pid85885
00:24:43.940 Removing: /var/run/dpdk/spdk_pid85902
00:24:43.940 Removing: /var/run/dpdk/spdk_pid86158
00:24:43.940 Removing: /var/run/dpdk/spdk_pid86285
00:24:43.940 Removing: /var/run/dpdk/spdk_pid86304
00:24:43.940 Clean
00:24:44.198 20:52:38 -- common/autotest_common.sh@1453 -- # return 0
00:24:44.198 20:52:38 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup
00:24:44.198 20:52:38 -- common/autotest_common.sh@732 -- # xtrace_disable
00:24:44.198 20:52:38 -- common/autotest_common.sh@10 -- # set +x
00:24:44.198 20:52:39 -- spdk/autotest.sh@391 -- # timing_exit autotest
00:24:44.198 20:52:39 -- common/autotest_common.sh@732 -- # xtrace_disable
00:24:44.198 20:52:39 -- common/autotest_common.sh@10 -- # set +x
00:24:44.198 20:52:39 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:24:44.198 20:52:39 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]]
00:24:44.198 20:52:39 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log
00:24:44.198 20:52:39 -- spdk/autotest.sh@396 -- # [[ y == y ]]
00:24:44.198 20:52:39 -- spdk/autotest.sh@398 -- # hostname
00:24:44.198 20:52:39 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info
00:24:44.455 geninfo: WARNING: invalid characters removed from testname!
00:25:11.002 20:53:05 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:25:14.281 20:53:08 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:25:16.182 20:53:11 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:25:18.760 20:53:13 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:25:21.292 20:53:15 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:25:23.194 20:53:18 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:25:25.725 20:53:20 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:25:25.725 20:53:20 -- spdk/autorun.sh@1 -- $ timing_finish
00:25:25.725 20:53:20 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]]
00:25:25.725 20:53:20 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:25:25.725 20:53:20 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:25:25.725 20:53:20 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:25:25.725 + [[ -n 5258 ]]
00:25:25.725 + sudo kill 5258
00:25:25.733 [Pipeline] }
00:25:25.747 [Pipeline] // timeout
00:25:25.751 [Pipeline] }
00:25:25.764 [Pipeline] // stage
00:25:25.769 [Pipeline] }
00:25:25.782 [Pipeline] // catchError
00:25:25.790 [Pipeline] stage
00:25:25.793 [Pipeline] { (Stop VM)
00:25:25.804 [Pipeline] sh
00:25:26.081 + vagrant halt
00:25:30.284 ==> default: Halting domain...
00:25:36.856 [Pipeline] sh
00:25:37.135 + vagrant destroy -f
00:25:40.416 ==> default: Removing domain...
00:25:40.426 [Pipeline] sh
00:25:40.699 + mv output /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/output
00:25:40.706 [Pipeline] }
00:25:40.719 [Pipeline] // stage
00:25:40.724 [Pipeline] }
00:25:40.738 [Pipeline] // dir
00:25:40.742 [Pipeline] }
00:25:40.755 [Pipeline] // wrap
00:25:40.760 [Pipeline] }
00:25:40.771 [Pipeline] // catchError
00:25:40.780 [Pipeline] stage
00:25:40.782 [Pipeline] { (Epilogue)
00:25:40.793 [Pipeline] sh
00:25:41.069 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:25:47.695 [Pipeline] catchError
00:25:47.697 [Pipeline] {
00:25:47.708 [Pipeline] sh
00:25:47.987 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:25:48.246 Artifacts sizes are good
00:25:48.254 [Pipeline] }
00:25:48.267 [Pipeline] // catchError
00:25:48.277 [Pipeline] archiveArtifacts
00:25:48.283 Archiving artifacts
00:25:48.451 [Pipeline] cleanWs
00:25:48.469 [WS-CLEANUP] Deleting project workspace...
00:25:48.469 [WS-CLEANUP] Deferred wipeout is used...
00:25:48.488 [WS-CLEANUP] done
00:25:48.490 [Pipeline] }
00:25:48.504 [Pipeline] // stage
00:25:48.509 [Pipeline] }
00:25:48.522 [Pipeline] // node
00:25:48.527 [Pipeline] End of Pipeline
00:25:48.552 Finished: SUCCESS